https://en.wikipedia.org/wiki/Product%20cost%20management
Product cost management
Product cost management (PCM) is a set of tools, processes, methods, and culture used by firms that develop and manufacture products to ensure that a product meets its profit (or cost) target.

Scope

There is no agreed-upon definition of product cost management, nor an agreed scope for what it encompasses. Some people argue that PCM is a synonym for target costing. However, others argue that PCM is different, because target costing is a pricing method, whereas PCM is focused on the maximum profit or minimum cost of a product, regardless of the price at which the product is sold to the end customer. Some analysts seem to equate PCM with design-to-cost. Some practitioners of PCM are mostly concerned with the cost of the product up until the point that the customer takes delivery (e.g. manufacturing costs + logistics costs), or the total cost of acquisition. They seek to launch products that meet profit targets at launch rather than reducing the costs of a product after production. Other people believe that PCM extends to a total cost of ownership or lifecycle costing (manufacturing + logistics + operational costs + disposal). Depending on the practitioner, PCM may include any combination of organizational or cultural change, processes, team roles, and tools. Many believe that PCM must encompass all four aspects to be successful, and have shown how the four parts work together.

Processes and activities

Depending on the scope the practitioner assigns to PCM, it may include one or more of the following processes:
- Change management and building a cost/profit-conscious culture
- Building cost management into the product lifecycle management processes
- DFM – Design for Manufacturing
- DFA – Design for Assembly
- DTC – Design to Cost
- DFP – Design for Procurement
- VA/VE – Value Analysis / Value Engineering
- DFSS – Design for Six Sigma
- Cost targeting
- Should cost / price
- Make/buy analysis
- Capital asset justification
- Commodity pricing
- Spend analysis
- Cost-volume-profit analysis

Tools

Initially, PCM was done with pencil and paper. With the advent of computers, companies started to create internal software for predicting, controlling, minimizing, recording, and sharing product costs. With the invention of spreadsheets, PCM tools got a major boost in ease of use and adoption. In the late 1970s, specialized third-party software was developed that could perform some of the activities included in PCM. Today, there are several tools that directly or indirectly promote themselves as "Product Cost Management" software solutions, and some of these tools also state that they can help users with target costing. However, despite the creation of third-party tools, spreadsheets, specifically Microsoft Excel, may still be by far the most popular tool among PCM practitioners.

Is PCM a software category?

In the mid-2000s, there was some discussion of whether PCM (sometimes referred to as enterprise cost management) would become a separate software category or be part of one or more of the existing enterprise software categories of enterprise resource planning (ERP), product lifecycle management (PLM), or supply chain management (SCM). The vendors who make specialized PCM software have not yet gained the revenue necessary for the major industry analysts to proclaim PCM as its own category. However, there has been at least one analyst report focusing on product cost analytics. It is unknown whether PCM will become part of a bigger enterprise software category.
At least one of the major ERP vendors and two of the major PLM vendors have products that they bill as product cost management or analytics solutions.

Purposes

The strategic purpose of PCM is to maximize the profit of products by making them as cost-efficient as possible. Tactically, this has been accomplished by using the various PCM techniques and tools discussed above in a predictive way. That is, the tools are used either to estimate an absolute value for what a cost should be, or to evaluate the relative cost of one design, manufacturing process, or supplier versus another. In 2012, some experts in the PCM field advocated that the purpose of PCM is not only to predict the most accurate cost, but also to serve as a tool for leverage in negotiation.
https://en.wikipedia.org/wiki/Larry%20Constantine
Larry Constantine
Larry LeRoy Constantine (born 1943) is an American software engineer, a professor in the Center for Exact Sciences and Engineering at the University of Madeira, Portugal, and considered one of the pioneers of computing. He has contributed numerous concepts and techniques forming the foundations of modern practice in software engineering and applications design and development.

Biography

Constantine grew up in Anoka, Minnesota, and graduated from Anoka High School in 1961 after being active in debate and theater as well as other extracurricular activities. He was named "Most Likely to Succeed" by his classmates. Constantine received an S.B. in management from the MIT Sloan School of Management in 1967 with a specialization in information systems and psychology. He received a certificate in family therapy in 1973 from the Boston Family Institute, a two-year postgraduate training program.

Constantine started his working career as a technical aide/programmer at the MIT Laboratory for Nuclear Science in 1963. From 1963 to 1966 he was a staff consultant and programmer/analyst at C-E-I-R, Inc. From 1966 to 1968 he was president of the Information & Systems Institute, Inc. Also in 1967 he became a postgraduate program instructor at the Wharton School of Business, University of Pennsylvania. From 1968 to 1972 he was a faculty member of the IBM Systems Research Institute. In 1973 he became director of research of the Family Service Society in Concord, Massachusetts. From 1973 to 1980 he was an assistant clinical professor of psychiatry at the Tufts University School of Medicine. Until 1987 he was an assistant professor of human development and family studies (adjunct) at the University of Connecticut. From 1984 to 1986 he was also a clinical supervisor for adolescent and family intervention at LUK, Inc., Fitchburg, Massachusetts. From 1987 to 1993 he also worked as an independent consultant. Since 1993 he has been chief scientist and principal consultant at Constantine & Lockwood, Ltd. From 1994 to 1999 he was professor of information technology at the University of Technology Sydney, Australia. Since 2006 he has been a professor in the Mathematics and Engineering Department at the University of Madeira, Portugal, where he headed the Laboratory for Usage-centered Software Engineering (LabUSE), a research center dedicated to studying the human aspects of modern software engineering, before becoming an Institute Fellow at the Madeira Interactive Technologies Institute in 2010.

In 1999 Constantine received the Jolt Award for Product Excellence, best book of 1999, for Software for Use. In 2001 he received the Platinum Award of Excellence (first place) in the Performance-Centered Design Competition 2001 (Siemens AG, STEP-7 Lite). In 2006 he was recognized as a Distinguished Engineer by the Association for Computing Machinery, and in 2007 he was made a Fellow of the ACM. He is the 2009 recipient of the Stevens Award for "outstanding contributions to the literature or practice of methods for software and systems development." He received a Simon Rockower Award in 2011 from the American Jewish Press Association.

Work

Constantine specializes in the human side of software development. His published work includes the influential classic text Structured Design, written with Ed Yourdon, and the award-winning Software for Use, written with Lucy Lockwood. His contributions to the practice of software development began in 1968 with his pioneering work on modular programming concepts.
Constantine was the primary force behind the discipline of structured design, set out in his book of the same name. Its key features, such as the structure chart and the data flow diagram, are commonly used and taught worldwide.

Structured design

Constantine, who learned programming at the Massachusetts Institute of Technology, began his professional career in computers with a summer job at Scientific Computing, at the time a subsidiary of Control Data Corporation, in Minneapolis. He went on to full-time work at MIT’s Laboratory for Nuclear Science, where he wrote routines for analyzing spark chamber photographs, and then to C-E-I-R, Inc., where he worked on economics simulations, business applications, project management tools, and programming languages. While still an undergraduate at MIT he began work on what was to become structured design, formed his first consulting company, and taught in a postgraduate program at the University of Pennsylvania Wharton School. The core of structured design, including structure charts and coupling and cohesion metrics, was substantially complete by 1968, when it was presented at the National Symposium on Modular Programming. He joined the faculty of IBM’s Systems Research Institute the same year, where he taught for four years and further refined his concepts.

As part of structured design, Constantine developed the concepts of cohesion (the degree to which the internal contents of a module are related) and coupling (the degree to which a module depends upon other modules). These two concepts have been influential in the development of software engineering, and stand apart from structured design as significant contributions in their own right. They have proved foundational in areas ranging from software design to software metrics, and indeed have passed into the vernacular of the discipline.

Constantine also developed methodologies that combine human-computer interaction design with software engineering. One such methodology, usage-centered design, is the topic of his 1999 book with Lucy Lockwood, Software for Use. This is a third significant contribution to the field, both well used in professional practice and the subject of academic study, and it is taught in human-computer interface courses at universities around the world. His work on human-computer interaction was influential for techniques like essential use cases and usage-centered design, which are widely used for building interactive software systems.

Family therapy

Constantine trained under family therapy pioneers David Kantor and Fred and Bunny Duhl at the Boston Family Institute, completing a two-year postgraduate certificate program in 1973. From 1973 to 1980 he was an assistant clinical professor of psychiatry in the Tufts University School of Medicine, training family therapists and supervising trainees at Boston State Hospital. He became a Licensed Clinical Social Worker and later a Licensed Marriage and Family Therapist in Massachusetts, and was designated an approved supervisor by the American Association for Marriage and Family Therapy. His contributions to theory and research in family therapy and human systems theory were summarized in Family Paradigms (Guilford Press, 1986), a book heralded at the time as “one of the finest theoretical books yet published in the family therapy field” and “among the most significant developments of the decade.” This work has also seen application in organization development.
He and his wife at the time, Joan Constantine, also researched and practiced group marriage in the 1970s. They created the Family Tree organization to promote healthy non-monogamous families, and collaboratively authored a book on the subject, Group Marriage: A Study of Contemporary Multilateral Marriage (Collier Books, 1974).

Patents

US patents:
- 7010753 – Anticipating drop acceptance indication
- 7055105 – Drop-enabled tabbed dialog
- 8161026 – Inexact date entry

Music

Although he played piano, saxophone, and violin as a child, Constantine gave up instrumental performance for singing. He sang with the award-winning Burtones ensemble while a student at MIT, is a twelve-year veteran and alum of the semi-professional Zamir Chorale of Boston, and is a member of the Zachor Choral Ensemble, a Boston-based group dedicated to keeping alive the memory of the Holocaust through music. Constantine is also a composer with several major works to his credit. He studied theory and composition under George Litterst and Stephan Peisch at the New England Conservatory. His first commissioned work, Concerto Grosso No. 1 in G minor, “Serendipity,” was premiered by the Rockford (Illinois) Pops Orchestra on 9 July 1981. His choral work “No Hidden Meanings,” based on a text by psychologist Sheldon Kopp, was commissioned by the American Humanist Association and premiered at MIT’s Kresge Auditorium on 20 June 1982. His choral setting of the traditional Shehechiyanu blessing was premiered on 18 April 2010 by HaShirim at the groundbreaking for Temple Ahavat Achim in Gloucester, Massachusetts.

Fiction

Constantine, an active (professional) member of the Science Fiction and Fantasy Writers of America, is the author of numerous short stories, mostly published under several pseudonyms. He edited Infinite Loop (Miller Freeman Books, 1993), an anthology of science fiction by writers in the computer field, described in the Midwest Book Review as “quite simply one of the best anthologies to appear in recent years.” Writing under the pen name Lior Samson, Constantine is the author of several critically acclaimed political thrillers, including Bashert, The Dome, Web Games, The Rosen Singularity, Chipset, Gasline, and Flight Track. His other fiction includes Avalanche Warning (Gesher Press, 2013), The Four-Color Puzzle (Gesher Press, 2013), and Requisite Variety: Collected Short Fiction (Gesher Press, 2011). His first novel, Bashert, was included in a time capsule at MIT by the class of 1967 for its 50th reunion. The time capsule is slated to be opened in 2067.

Publications

Constantine has more than 200 published papers to his credit, as well as 22 books. A selection:
- 1974. Group Marriage: A Study of Contemporary Multilateral Marriage. With Joan Constantine. Collier Books.
- 1975. Structured Design. With Ed Yourdon. Yourdon Press.
- 1981. Children and Sex: New Findings, New Perspectives. (ed.) With Floyd Martinson. Little, Brown & Co.
- 1986. Family Paradigms: The Practice of Theory in Family Therapy. Guilford Press.
- 1995. Constantine on Peopleware. Yourdon Press Computing Series.
- 1999. Software for Use: A Practical Guide to the Essential Models and Methods of Usage-Centered Design. With Lucy Lockwood. Reading, MA: Addison-Wesley.
- 2001. The Peopleware Papers: Notes on the Human Side of Software. NJ: Prentice Hall.
- 2001. Beyond Chaos: The Expert Edge in Managing Software Development. (ed.) Boston: Addison-Wesley.
- 2002. The Unified Process Transition and Production Phases. (ed.) With Scott W. Ambler. CMP Books, Lawrence.
External links
- Larry Constantine homepage at the University of Madeira (in Portuguese)
- Constantine & Lockwood, Ltd.
- Larry Constantine's website
https://en.wikipedia.org/wiki/Intellisync
Intellisync
Intellisync Corporation was a provider of data synchronization software for mobile devices, such as mobile phones and personal digital assistants (PDAs). The company was acquired in 2006 by Nokia.

History

Puma Technology (known as Pumatech) was based in San Jose, California. It was founded in August 1993 by Princeton classmates Bradley A. Rowe and Stephen A. Nicol. The company was a pioneer in the development of mobile device data synchronization software in the early days of mobile device computing, with 36 total U.S. patents awarded. Three rounds of venture capital included the investors Greylock Partners, CSK Venture Capital, and Intel. In April 1996, Pumatech acquired IntelliLink Corporation, based in Nashua, New Hampshire, for $3.5 million. It announced an initial public offering on the NASDAQ on 6 December 1996, raising about $37 million, and traded under the symbol PUMA. Pumatech acquired SoftMagic in July 1998, ProxiNet in October 1999, NetMind in February 2000, Dry Creek Software in July 2000, and The Windward Group in October 2000. In March 2003, it acquired the Starfish Software division of Motorola. Pumatech acquired the Alpharetta, Georgia-based Synchrologic in late 2003 and renamed itself Intellisync Corporation (after its IntelliSync product family) in 2004. It traded under the symbol SYNC.

On 31 January 2006, stockholder approval was secured for Intellisync to be acquired by Nokia. On 10 February 2006, Nokia completed its acquisition. In November 2006, Nokia announced integration with Exchange ActiveSync and its Eseries products. After the Nokia acquisition, the Intellisync headquarters in San Jose closed and all employees moved to the Nokia office in Mountain View, California. The company had development offices in Alpharetta, Georgia; Bulgaria; New Delhi; Tokyo; and Cluj-Napoca. Nokia announced that the IntelliSync Desktop product was discontinued; the last date to order the product was 19 July 2008, and product support was provided through 19 July 2010. On 29 September 2008, Nokia announced it planned to cease developing or marketing its own behind-the-firewall business mobility software (the Intellisync Mobility Product Suite). The relevant technologies and expertise would be reallocated to Nokia's consumer push e-mail service. Nokia said it would integrate devices with software from vendors such as Microsoft, IBM, Cisco Systems and others.

Products

Pumatech's first product, released in 1997, was called TranXit. TranXit provided automatic file synchronization between two Windows-based PCs, directly competing with a then-popular product called LapLink from Traveling Software, Inc. While LapLink was predominantly sold as a boxed retail product, Pumatech marketed TranXit directly to PC manufacturers who pre-installed the software on their systems. The company signed license agreements with IBM (ThinkPad), Compaq, Toshiba, Acer, Canon, NEC, Epson, and approximately 20 other PC manufacturers. After the IntelliLink acquisition, the company broadened its focus to mobile devices, introducing the Intellisync Mobile Suite software designed for individuals, corporations and wireless carriers. It included four modules that could be installed independently or in any combination: wireless email, systems/device management, file sync and application sync. An application-specific reverse proxy was sold as Intellisync Secure Gateway, for more secure firewall configuration.
The intent was to provide database synchronization with the company's application sync product; email and PIM synchronization with wireless email; and mobile device management and static file distribution with device management/file sync. These products supported synchronization with a corporation's Microsoft Exchange Server, Domino mail servers or Novell GroupWise, as well as POP and IMAP mail. The company originally provided software only for Windows-based computers, Palm devices, handheld PCs, and Pocket PCs, but expanded into supporting Symbian, BREW, and other mobile devices. While the company originally marketed its product to large businesses (such as Boeing, Nintendo and the United States military), it began rebranding its product to be distributed by wireless carriers as a revenue-enhancing service for individual consumers. Wireless providers that partnered with Intellisync included Verizon and Eurotel. Yahoo used Intellisync products, as did Research in Motion (RIM) with its popular BlackBerry family of personal communicators. AutoSync Yahoo was a product written by Intellisync to synchronize Yahoo contacts, notes, calendars and tasks with Outlook Express and MS Outlook.

External links
- http://www.intellisync.com
https://en.wikipedia.org/wiki/Shogo%3A%20Mobile%20Armor%20Division
Shogo: Mobile Armor Division
Shogo: Mobile Armor Division is a mecha first-person shooter video game released by Monolith Productions in 1998. It was the first game to use Monolith's flagship Lithtech engine. As well as having the player perform missions on foot like other conventional FPS games, the game also allows the player to pilot a large mech.

Gameplay

Shogo features a mix of standard on-foot first-person shooter action and combat with anime-style bipedal mechs. Unlike mech simulator games such as the MechWarrior series, the mechs in Shogo are controlled essentially the same way as in first-person shooter games. An inherent feature of the combat system in Shogo is the possibility of critical hits, whereby attacking an enemy will occasionally bring about a health bonus for the player while the enemy in question loses more health than usual from the weapon used. However, enemy characters are also capable of scoring critical hits on the player.

Plot

Players take the role of Sanjuro Makabe, a Mobile Combat Armor (MCA) pilot and a commander in the United Corporate Authority (UCA) army, during a brutal war for the planet Cronus and its precious liquid reactant, kato. Players must locate and assassinate a rebel leader known only as Gabriel. Prior to the game's first level, Sanjuro had lost his brother, Toshiro; his best friend, Baku; and his girlfriend, Kura, during the war. He is now driven by revenge and his romantic relationship with Kathryn, Kura's sister; in Sanjuro's words, "It's kinda complicated." At two pivotal points in the game, the player has the opportunity to make a crucial decision, which can alter the game's ending. While the first decision is almost purely a narrative one, the second actually determines who the player will face for the rest of the game and how the game will end.

Development and release

Shogo was originally known as Riot: Mobile Armor. It has heavy influences from Japanese animation, particularly Patlabor and Appleseed, and the real robot mecha genre. The game's lead designer, Craig Hubbard, expressed that Shogo "(although critically successful) fell embarrassingly short of original design goals", and that "it is a grim reminder of the perils of wild optimism and unchecked ambition" exercised by the relatively small development team. According to Hubbard, "The whole project was characterized by challenges. We had issues with planning, prioritization, ambition, scope, staffing, inexperience (including my own), and just about everything that can go wrong on a project. I think what saved the game was that we realized about six months before our ship date that there was no way we could make the game great, so we just focused on making it fun." This involved the team putting "all [their] energy in making the weapons really fun to use."

A later game developed by Monolith became The Operative: No One Lives Forever, released in 2000. During the development of that game, it took a long time for Monolith to find a publishing partner. According to Hubbard, during this time, the game that became No One Lives Forever "mutated constantly in order to please prospective producers and marketing departments. The game actually started off as a mission-based, anime-inspired, paramilitary action thriller intended as a spiritual sequel to Shogo and ended up as a 60s spy adventure in the tradition of Our Man Flint and countless other 60s spy movies and shows." (Parts of the initial "paramilitary action thriller" concept evolved into F.E.A.R., another Monolith game, released after the No One Lives Forever series, in 2005.)
Cancelled expansion packs

The expansion pack Shugotenshi would have given more insight into Kura's role. It would have comprised six or eight levels of Kura fighting and coming to terms with the death of Hank. Its features would have included various body armor for Kura and new enemies and weapons for her.

Legacy of the Fallen would have moved away from the fighting on Cronus and taken the player to the remote kato mining facility at Iota-33. It would have shown how well organized the Fallen actually were, and the weapon capabilities of an Ambed (Advanced Mechanical Biological Engineering Division) team. Legacy of the Fallen was to have an entirely new cast of characters, five new mecha to choose from, six new on-foot weapons, five new mecha weapons, several new enemy aliens, and levels structured more like those of Half-Life.

Ports

Shogo was ported to the Amiga PowerPC platform in 2001 by Hyperion Entertainment. Hyperion also made the Macintosh and Linux ports of Shogo. The game did not sell as well as hoped, most notably on Linux, despite becoming a best seller on Tux Games. Hyperion put some of the blame on its publisher, Titan Computer, and on the likelihood that Linux users would dual-boot with Windows. A version for BeOS was also in development in 1999 by Be Inc.

Reception

The game received "favorable" reviews, two points shy of "universal acclaim", according to the review aggregation website Metacritic. Next Generation said, "Obviously there are a lot of alternatives in this market, with Half-Life and SiN releasing at the same time, but Shogo has clear merits and stands up on its own. It's an excellent game and it will be a fine contender."

Sales

Monolith shipped 100,000 units of the game to retailers in the game's debut week, following its launch in early November 1998. However, the game underperformed commercially. It sold roughly 20,000 units in the United States during 1998's Christmas shopping season, a figure that Mark Asher of CNET Gamecenter called "disappointing". Combined with the failure of competitors SiN and Blood II: The Chosen, these numbers led him to speculate that the first-person shooter genre's market size was smaller than commonly believed, as the "only FPS game that has done really well [over the period] is Half-Life." Shogo's low sales resulted in the cancellation of its planned expansion pack. Analyzing Shogo's performance in his 2003 book Games That Sell!, Mark H. Walker argued that it "never sold as well as it should have" because of Monolith's status as a small publisher. Shelf space for games was allotted based on a market development fund (MDF) system at the time: major retailers charged fees for advertising and endcap shelving, which publishers were required to pay before a game would be stocked. Because larger publishers could afford greater MDF spending than Monolith, Walker believed that Shogo "just couldn't get widespread distribution" in mainstream retail stores compared to its competitors.

External links
- Official announcement for Shugotenshi on web.archive.org
- Modern port multiplayer servers on www.ShogoServers.com
https://en.wikipedia.org/wiki/Entropy%20%28computing%29
Entropy (computing)
In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. This randomness is often collected from hardware sources, either pre-existing ones (such as variance in fan noise, hard-disk timings, or mouse movements) or specially provided randomness generators. A lack of entropy can have a negative impact on performance and security.

Linux kernel

The Linux kernel generates entropy from keyboard timings, mouse movements, and IDE timings, and makes the random character data available to other operating system processes through the special files /dev/random and /dev/urandom. This capability was introduced in Linux version 1.3.30.

There are some Linux kernel patches that allow the use of more entropy sources. The audio_entropyd project, which is included in some operating systems such as Fedora, allows audio data to be used as an entropy source. Also available are video_entropyd, which calculates random data from a video source, and entropybroker, which includes these three and can be used to distribute the entropy data to systems not capable of running any of them (e.g. virtual machines). Furthermore, one can use the HAVEGE algorithm through haveged to pool entropy. In some systems, network interrupts can be used as an entropy source as well.

OpenBSD kernel

OpenBSD has integrated cryptography as one of its main goals and has always worked on increasing its entropy, both for encryption and for randomising many parts of the OS, including various internal operations of its kernel. Around 2011, two of the random devices were dropped and linked into a single source, as it could produce hundreds of megabytes per second of high-quality random data on an average system. This made depletion of random data by userland programs impossible on OpenBSD once enough entropy had initially been gathered.

Hurd kernel

A driver ported from the Linux kernel has been made available for the Hurd kernel.

Solaris

/dev/random and /dev/urandom have been available as Sun packages or patches for Solaris since Solaris 2.6, and have been a standard feature since Solaris 9. As of Solaris 10, administrators can remove existing entropy sources or define new ones via the kernel-level cryptographic framework. A third-party kernel module implementing /dev/random is also available for releases dating back to Solaris 2.4.

OS/2

There is a software package for OS/2 that allows software processes to retrieve random data.

Windows

Microsoft Windows releases newer than Windows 95 use CryptoAPI to gather entropy in a similar fashion to the Linux kernel's /dev/random. Windows's CryptoAPI uses the binary registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\RNG\Seed to store a seeded value from all of its entropy sources. Because CryptoAPI is closed-source, some free and open-source software applications running on the Windows platform use other measures to get randomness. For example, GnuPG, as of version 1.06, uses a variety of sources, such as the number of free bytes in memory, which are combined with a random seed to generate the randomness it needs. Programmers using CAPI can get entropy by calling CAPI's CryptGenRandom(), after properly initializing it. CryptoAPI was deprecated in Windows Vista and higher; the new API is called Cryptography API: Next Generation (CNG). Windows's CNG uses the binary registry key HKEY_LOCAL_MACHINE\SYSTEM\RNG\Seed to store a seeded value.
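Most programming environments expose these platform generators behind a single call. As a minimal illustrative sketch (not drawn from this article), Python's os.urandom draws from the kernel pool on Linux and from the CryptoAPI/CNG generator on Windows:

```python
import os

# Read 32 bytes (256 bits) from the operating system's CSPRNG.
# On Linux this is backed by the kernel's random pool (/dev/urandom
# or the getrandom() syscall); on Windows it is backed by the
# CryptoAPI/CNG system generator, depending on the Python version.
key_material = os.urandom(32)
print(key_material.hex())
```

Because the call defers to the operating system, an application inherits whatever entropy-gathering machinery the platform provides, as described above.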
Newer versions of Windows are able to use a variety of entropy sources:
- TPM, if available and enabled on the motherboard
- Entropy from the UEFI interface (if booted from UEFI)
- The RDRAND CPU instruction, if available
- The hardware system clock (RTC)
- OEM0 ACPI table content
- Interrupt timings
- Keyboard timings and mouse movements

Embedded systems

Embedded systems have difficulty gathering enough entropy, as they are often very simple devices with short boot times, and key generation operations that require sufficient entropy are often among the first things a system does. Common entropy sources may not exist on these devices, or will not have been active long enough during boot to ensure sufficient entropy exists. Embedded devices often lack rotating disk drives, human interface devices, and even fans, and the network interface, if any, will not have been active for long enough to provide much entropy. Lacking easy access to entropy, some devices may use hard-coded keys to seed random generators, or seed random generators from easily guessed unique identifiers such as the device's MAC address. A simple study demonstrated the widespread use of weak keys by finding many embedded systems, such as routers, using the same keys. It was thought that the number of weak keys found would have been far higher if simple and often attacker-determinable one-time unique identifiers had not been incorporated into the entropy of some of these systems.

(De)centralized systems

A true random number generator (TRNG) can be a centralized or decentralized service. One example of a centralized system where a random number can be acquired is the randomness beacon service from the National Institute of Standards and Technology. The Cardano platform uses the participants of its decentralized proof-of-stake protocol to generate random numbers.

Other systems

There are some software packages that allow the use of a userspace process to gather random characters, exactly what /dev/random does, such as EGD, the Entropy Gathering Daemon.

Hardware-originated entropy

Modern CPUs and hardware often feature integrated generators that can provide high-quality and high-speed entropy to operating systems. On systems based on the Linux kernel, one can read the entropy generated from such a device through /dev/hw_random. However, sometimes /dev/hw_random may be slow. There are some companies manufacturing entropy generation devices, and some of them are shipped with drivers for Linux. On a Linux system, one can install the rng-tools package, which supports the true random number generators (TRNGs) found in CPUs supporting the RDRAND instruction, in Trusted Platform Modules, and in some Intel, AMD, or VIA chipsets, effectively increasing the entropy collected into /dev/random and potentially improving cryptographic strength. This is especially useful on headless systems that have no other sources of entropy.

Practical implications

System administrators, especially those supervising Internet servers, have to ensure that server processes will not halt because of entropy depletion. Entropy on servers utilising the Linux kernel, or any other kernel or userspace process that generates entropy from the console and the storage subsystem, is often less than ideal because of the lack of a mouse and keyboard; such servers have to generate their entropy from a limited set of resources such as IDE timings. The entropy pool size in Linux is viewable through the file /proc/sys/kernel/random/entropy_avail and should generally be at least 2000 bits (out of a maximum of 4096).
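As a minimal sketch of that check, assuming a Linux system that exposes the legacy proc interface:

```python
# Read the kernel's estimate of available entropy, in bits.
# /proc/sys/kernel/random/entropy_avail exists on Linux; the 2000-bit
# threshold is the guideline quoted above.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    available_bits = int(f.read().strip())

print(f"available entropy: {available_bits} bits")
if available_bits < 2000:
    print("warning: entropy pool below the suggested 2000-bit level")
```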
The available entropy changes frequently. Administrators responsible for systems that have low or zero entropy should not attempt to use /dev/urandom as a substitute for /dev/random, as this may cause SSL/TLS connections to have lower-grade encryption. Some software systems change their Diffie-Hellman keys often, and this may in some cases help a server to continue functioning normally even with an entropy bottleneck. On servers with low entropy, a process can appear hung when it is waiting for random characters to appear in /dev/random (on Linux-based systems). For example, there was a known problem in Debian that caused exim4 to hang in some cases because of this.

Security

Entropy sources can be used for keyboard timing attacks. Entropy can also affect the cryptography (TLS/SSL) of a server: if a server fails to use a proper source of randomness, the keys generated by the server will be insecure. In some cases a cracker (malicious attacker) can guess some bits of entropy from the output of a pseudorandom number generator (PRNG); this happens when not enough entropy is introduced into the PRNG.

Potential sources

Commonly used entropy sources include the mouse, keyboard, and IDE timings, but there are other potential sources. For example, one could collect entropy from the computer's microphone, or by building a sensor to measure the air turbulence inside a disk drive. For Unix/BSD derivatives there exists a USB-based solution that utilizes an ARM Cortex CPU for filtering and securing the bit stream generated by two entropy generator sources in the system. Cloudflare uses an image feed from a rack of 80 lava lamps as an additional source of entropy.

See also
- Entropy (information theory)
- Entropy
- Randomness

External links
- Overview of entropy and of entropy generators in Linux
https://en.wikipedia.org/wiki/QEMU
QEMU
QEMU is a free and open-source emulator. It emulates a machine's processor through dynamic binary translation and provides a set of different hardware and device models for the machine, enabling it to run a variety of guest operating systems. It can interoperate with Kernel-based Virtual Machine (KVM) to run virtual machines at near-native speed. QEMU can also do emulation for user-level processes, allowing applications compiled for one architecture to run on another.

Licensing

QEMU was written by Fabrice Bellard and is free software, mainly licensed under the GNU General Public License (GPL for short). Various parts are released under the BSD license, GNU Lesser General Public License (LGPL) or other GPL-compatible licenses.

Operating modes

QEMU has multiple operating modes:

User-mode emulation: In this mode QEMU runs single Linux or Darwin/macOS programs that were compiled for a different instruction set. System calls are thunked for endianness and for 32/64-bit mismatches. Fast cross-compilation and cross-debugging are the main targets for user-mode emulation.

System emulation: In this mode QEMU emulates a full computer system, including peripherals. It can be used to provide virtual hosting of several virtual computers on a single computer. QEMU can boot many guest operating systems, including Linux, Solaris, Microsoft Windows, DOS, and BSD; it supports emulating several instruction sets, including x86, MIPS, 32-bit ARMv7, ARMv8, PowerPC, SPARC, ETRAX CRIS and MicroBlaze.

KVM hosting: Here QEMU deals with the setting up and migration of KVM images. It is still involved in the emulation of hardware, but the execution of the guest is done by KVM as requested by QEMU.

Xen hosting: QEMU is involved only in the emulation of hardware; the execution of the guest is done within Xen and is totally hidden from QEMU.

Features

QEMU can save and restore the state of the virtual machine with all programs running. Guest operating systems do not need patching in order to run inside QEMU. QEMU supports the emulation of various architectures, including x86, MIPS64 (up to Release 6), SPARC (sun4m and sun4u), ARM (Integrator/CP and Versatile/PB), SuperH, PowerPC (PReP and Power Macintosh), ETRAX CRIS, MicroBlaze, and RISC-V.

The virtual machine can interface with many types of physical host hardware, including the user's hard disks, CD-ROM drives, network cards, audio interfaces, and USB devices. USB devices can be completely emulated, or the host's USB devices can be used, although this requires administrator privileges and does not work with all devices.

Virtual disk images can be stored in a special format (qcow or qcow2) that only takes up as much disk space as the guest OS actually uses. This way, an emulated 120 GB disk may occupy only a few hundred megabytes on the host. The QCOW2 format also allows the creation of overlay images that record the differences from another (unmodified) base image file. This provides the possibility of reverting the emulated disk's contents to an earlier state. For example, a base image could hold a fresh install of an operating system that is known to work, while the overlay images record subsequent changes. Should the guest system become unusable (through virus attack, accidental system destruction, etc.), the user can delete the overlay and revert to the earlier emulated disk image.

QEMU can emulate network cards (of different models) which share the host system's connectivity by doing network address translation, effectively allowing the guest to use the same network as the host.
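As a minimal sketch tying together the overlay and NAT-networking behaviour described above (the image names and forwarded ports are hypothetical; qemu-img and qemu-system-x86_64 ship with QEMU):

```python
import subprocess

# 1. Create a copy-on-write overlay so base.qcow2 stays pristine;
#    deleting overlay.qcow2 later reverts the disk to the base state.
subprocess.run(
    ["qemu-img", "create",
     "-f", "qcow2",        # format of the new overlay image
     "-b", "base.qcow2",   # backing (base) file, left unmodified
     "-F", "qcow2",        # format of the backing file
     "overlay.qcow2"],
    check=True,
)

# 2. Boot the overlay with a user-mode (NAT) network card; host TCP
#    port 2222 is redirected to the guest's SSH port 22.
subprocess.run(
    ["qemu-system-x86_64",
     "-m", "1024",
     "-drive", "file=overlay.qcow2,format=qcow2",
     "-netdev", "user,id=net0,hostfwd=tcp::2222-:22",
     "-device", "e1000,netdev=net0"],
    check=True,
)
```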
The virtual network cards can also connect to network cards of other instances of QEMU or to local TAP interfaces. Network connectivity can also be achieved by bridging a TUN/TAP interface used by QEMU with a non-virtual Ethernet interface on the host OS, using the host OS's bridging features.

QEMU integrates several services to allow the host and guest systems to communicate, for example an integrated SMB server and network-port redirection (to allow incoming connections to the virtual machine). It can also boot Linux kernels without a bootloader.

QEMU does not depend on the presence of graphical output methods on the host system. Instead, it can allow one to access the screen of the guest OS via an integrated VNC server. It can also use an emulated serial line, without any screen, with applicable operating systems. Simulating multiple CPUs running SMP is possible.

QEMU does not require administrative rights to run, unless additional kernel modules for improving speed (like KQEMU) are used or certain modes of its network connectivity model are utilized.

Tiny Code Generator

The Tiny Code Generator (TCG) aims to remove the shortcoming of relying on a particular version of GCC or any compiler, instead incorporating the compiler (code generator) into other tasks performed by QEMU at run time. The whole translation task thus consists of two parts: basic blocks of target code (TBs) are rewritten in TCG ops, a kind of machine-independent intermediate notation, and subsequently this notation is compiled for the host's architecture by TCG. Optional optimisation passes are performed between them, in a just-in-time compiler (JIT) mode.

TCG requires dedicated code written to support every architecture it runs on, so that the JIT knows what to translate the TCG ops to. If no dedicated JIT code is available for the architecture, TCG falls back to a slow interpreter mode called the TCG Interpreter (TCI). It also requires updating the target code to use TCG ops instead of the old dyngen ops. Starting with QEMU version 0.10.0, TCG ships with the QEMU stable release. It replaces dyngen, which relied on GCC 3.x to work.

Accelerator

KQEMU was a Linux kernel module, also written by Fabrice Bellard, which notably sped up emulation of x86 or x86-64 guests on platforms with the same CPU architecture. This worked by running user-mode code (and optionally some kernel code) directly on the host computer's CPU, and by using processor and peripheral emulation only for kernel-mode and real-mode code. KQEMU could execute code from many guest OSes even if the host CPU did not support hardware-assisted virtualization. KQEMU was initially a closed-source product available free of charge, but starting from version 1.3.0pre10 (February 2007), it was relicensed under the GNU General Public License. QEMU versions starting with 0.12.0 support large memory, which makes them incompatible with KQEMU. Newer releases of QEMU have completely removed support for KQEMU.

QVM86 was a GNU GPLv2-licensed drop-in replacement for the then closed-source KQEMU. The developers of QVM86 ceased development in January 2007.

Kernel-based Virtual Machine (KVM) has mostly taken over as the Linux-based hardware-assisted virtualization solution for use with QEMU in the wake of the lack of support for KQEMU and QVM86. QEMU can also use KVM on other architectures like ARM and MIPS.
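The accelerator in use can be chosen per invocation. A minimal sketch, assuming a hypothetical disk image and using QEMU's -accel flag to prefer KVM where /dev/kvm exists and fall back to pure TCG emulation otherwise:

```python
import os
import subprocess

# Prefer hardware-assisted virtualization via KVM when the kernel
# module is loaded (it exposes /dev/kvm); otherwise fall back to
# QEMU's own Tiny Code Generator.
accel = "kvm" if os.path.exists("/dev/kvm") else "tcg"

subprocess.run(
    ["qemu-system-x86_64",
     "-accel", accel,           # kvm or tcg
     "-m", "2048",              # 2 GiB of guest RAM
     "-drive", "file=disk.qcow2,format=qcow2"],  # hypothetical image
    check=True,
)
```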
Intel's Hardware Accelerated Execution Manager (HAXM) is an open-source alternative to KVM for x86-based hardware-assisted virtualization on NetBSD, Linux, Windows and macOS using Intel VT. Intel mostly solicits its use with QEMU for Android development. Starting with version 2.9.0, the official QEMU includes support for HAXM, under the name hax.

QEMU also supports the following accelerators:
- hvf, Apple's Hypervisor.framework, based on Intel VT
- whpx, Microsoft's Windows Hypervisor Platform, based on Intel VT or AMD-V
- tcg, QEMU's own Tiny Code Generator; this is the default

Supported disk image formats

QEMU supports the following disk image formats:
- macOS Universal Disk Image Format (.dmg) – read-only
- Bochs – read-only
- Linux cloop – read-only
- Parallels disk image (.hdd, .hds) – read-only
- QEMU copy-on-write (.qcow2, .qed, .qcow, .cow)
- VirtualBox Virtual Disk Image (.vdi)
- Virtual PC Virtual Hard Disk (.vhd)
- Virtual VFAT
- VMware Virtual Machine Disk (.vmdk)
- Raw images (.img) that contain sector-by-sector contents of a disk
- CD/DVD images (.iso) that contain sector-by-sector contents of an optical disk (e.g. booting live OSes)

QEMU Object Model

The QEMU Object Model (QOM) provides a framework for registering user-creatable types and instantiating objects from those types. QOM provides the following features:
- A system for dynamically registering types
- Support for single inheritance of types
- Multiple inheritance of stateless interfaces

Hardware-assisted emulation

The MIPS-compatible Loongson-3 processor adds 200 new instructions to help QEMU translate x86 instructions; those new instructions lower the overhead of executing x86/CISC-style instructions in the MIPS pipeline. With additional improvements in QEMU by the Chinese Academy of Sciences, Loongson-3 achieves an average of 70% of the performance of executing native binaries while running x86 binaries from nine benchmarks. No source code has been published for this fork, so the claim cannot be verified independently.

Parallel emulation

Virtualization solutions that use QEMU are able to execute multiple virtual CPUs in parallel. For user-mode emulation, QEMU maps emulated threads to host threads. For full system emulation, QEMU is capable of running a host thread for each emulated virtual CPU (vCPU). This depends on the target having been updated to support parallel system emulation; currently ARM, Alpha, HP-PA, PowerPC, RISC-V, s390x, x86 and Xtensa are supported. Otherwise, a single thread is used to emulate all virtual CPUs (vCPUs), executing each vCPU in a round-robin manner.

Integration

VirtualBox

VirtualBox, first released in January 2007, used some of QEMU's virtual hardware devices and had a built-in dynamic recompiler based on QEMU. As with KQEMU, VirtualBox runs nearly all guest code natively on the host via the VMM (Virtual Machine Manager) and uses the recompiler only as a fallback mechanism, for example when guest code executes in real mode. In addition, VirtualBox did a lot of code analysis and patching using a built-in disassembler in order to minimize recompilation. VirtualBox is free and open-source (available under GPL), except for certain features.

Xen-HVM

Xen, a virtual machine monitor, can run in HVM (hardware virtual machine) mode, using Intel VT-x or AMD-V hardware x86 virtualization extensions and ARM Cortex-A7 and Cortex-A15 virtualization extensions. This means that instead of paravirtualized devices, a real set of virtual hardware is exposed to the domU, which can use real device drivers to talk to it.
QEMU includes several components: CPU emulators, emulated devices, generic devices, machine descriptions, a user interface, and a debugger. The emulated devices and generic devices in QEMU make up its device models for I/O virtualization. They comprise a PIIX3 IDE (with some rudimentary PIIX4 capabilities), Cirrus Logic or plain VGA emulated video, RTL8139 or E1000 network emulation, and ACPI support. APIC support is provided by Xen.

Xen-HVM has device emulation based on the QEMU project to provide I/O virtualization to the VMs. Hardware is emulated via a QEMU "device model" daemon running as a backend in dom0. Unlike other QEMU running modes (dynamic translation or KVM), virtual CPUs are completely managed by the hypervisor, which takes care of stopping them while QEMU is emulating memory-mapped I/O accesses.

KVM

KVM (Kernel-based Virtual Machine) is a FreeBSD and Linux kernel module that allows a user-space program access to the hardware virtualization features of various processors, with which QEMU is able to offer virtualization for x86, PowerPC, and S/390 guests. When the target architecture is the same as the host architecture, QEMU can make use of KVM's particular features, such as acceleration.

Win4Lin Pro Desktop

In early 2005, Win4Lin introduced Win4Lin Pro Desktop, based on a 'tuned' version of QEMU and KQEMU, which hosts NT versions of Windows. In June 2006, Win4Lin released Win4Lin Virtual Desktop Server, based on the same code base, which serves Microsoft Windows sessions to thin clients from a Linux server. In September 2006, Win4Lin announced a change of the company name to Virtual Bridges with the release of Win4BSD Pro Desktop, a port of the product to FreeBSD and PC-BSD. Solaris support followed in May 2007 with the release of Win4Solaris Pro Desktop and Win4Solaris Virtual Desktop Server.

SerialICE

SerialICE is a QEMU-based firmware debugging tool that runs system firmware inside QEMU while accessing real hardware through a serial connection to a host system. This can be used as a cheap replacement for hardware in-circuit emulators (ICE).

WinUAE

WinUAE introduced support for the CyberStorm PPC and Blizzard 603e boards using the QEMU PPC core in version 3.0.0.

Unicorn

Unicorn is a CPU emulation framework based on QEMU's TCG CPU emulator. Unlike QEMU, Unicorn focuses on the CPU only: no emulation of any peripherals is provided, and raw binary code (outside the context of an executable file or a system image) can be run directly. Unicorn is thread-safe and has multiple bindings and instrumentation interfaces.
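As an illustration of that CPU-only focus, Unicorn's official Python binding can execute raw machine code with nothing set up but a block of mapped memory. A minimal sketch (the two code bytes are the x86 instructions inc ecx and dec edx):

```python
from unicorn import Uc, UC_ARCH_X86, UC_MODE_32
from unicorn.x86_const import UC_X86_REG_ECX, UC_X86_REG_EDX

X86_CODE32 = b"\x41\x4a"   # INC ecx; DEC edx
BASE = 0x1000000           # address at which to map the code

mu = Uc(UC_ARCH_X86, UC_MODE_32)     # a bare 32-bit x86 CPU
mu.mem_map(BASE, 2 * 1024 * 1024)    # 2 MiB for code and data
mu.mem_write(BASE, X86_CODE32)       # place the raw machine code
mu.reg_write(UC_X86_REG_ECX, 0x1234)
mu.reg_write(UC_X86_REG_EDX, 0x7890)
mu.emu_start(BASE, BASE + len(X86_CODE32))  # run just those bytes

print(hex(mu.reg_read(UC_X86_REG_ECX)))  # 0x1235
print(hex(mu.reg_read(UC_X86_REG_EDX)))  # 0x788f
```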
Emulated hardware platforms

x86

Besides the CPU (which is also configurable and can emulate a number of Intel CPU models, including (as of 3 March 2018) Sandy Bridge, Ivy Bridge, Haswell, Broadwell and Skylake), the following devices are emulated:
- CD/DVD-ROM drive using an ISO image
- Floppy disk drive
- ATA controller or Serial ATA AHCI controller
- Graphics card: Cirrus CLGD 5446 PCI VGA card, standard VGA graphics card with Bochs VBE, Red Hat QXL VGA
- Network card: Realtek 8139C+ PCI, NE2000 PCI, NE2000 ISA, PCnet, E1000 (PCI Intel Gigabit Ethernet) and E1000E (PCIe Intel Gigabit Ethernet)
- NVMe disk interface
- Serial port
- Parallel port
- PC speaker
- i440FX/PIIX3 or Q35/ICH9 chipsets
- PS/2 mouse and keyboard
- SCSI controller: LSI MegaRAID SAS 1078, LSI53C895A, NCR53C9x as found in the AMD PCscsi and Tekram DC-390 controllers
- Sound card: Sound Blaster 16, AudioPCI ES1370 (AC97), Gravis Ultrasound, and Intel HD Audio
- Watchdog timer (Intel 6300 ESB PCI, or iB700 ISA)
- USB 1.x/2.x/3.x controllers (UHCI, EHCI, xHCI)
- USB devices: audio, Bluetooth dongle, HID (keyboard/mouse/tablet), MTP, serial interface, CAC smartcard reader, storage (bulk-only transfer and USB Attached SCSI), Wacom tablet
- Paravirtualized VirtIO devices: block device, network card, SCSI controller, video device, serial interface, balloon driver, 9pfs filesystem driver
- Paravirtualized Xen devices: block device, network card, console, framebuffer and input device

The BIOS implementation used by QEMU starting from version 0.12 is SeaBIOS. The VGA BIOS implementation comes from Plex86/Bochs. The UEFI firmware for QEMU is OVMF.

PowerPC

PowerMac

QEMU emulates the following PowerMac peripherals:
- UniNorth PCI bridge
- PCI VGA-compatible graphics card which maps the VESA Bochs Extensions
- Two PMAC IDE interfaces with hard disk and CD-ROM support
- NE2000 PCI adapter
- Non-volatile RAM
- VIA-CUDA with ADB keyboard and mouse

OpenBIOS is used as the firmware.

PREP

QEMU emulates the following PREP peripherals:
- PCI bridge
- PCI VGA-compatible graphics card with VESA Bochs Extensions
- Two IDE interfaces with hard disk and CD-ROM support
- Floppy disk drive
- NE2000 network adapter
- Serial interface
- PREP non-volatile RAM
- PC-compatible keyboard and mouse

On the PREP target, Open Hack'Ware, an Open-Firmware-compatible BIOS, is used.

IBM System p

QEMU can emulate the paravirtual sPAPR interface with the following peripherals:
- PCI bridge, for access to virtio devices, VGA-compatible graphics, USB, etc.
- Virtual I/O network adapter, SCSI controller, and serial interface
- sPAPR non-volatile RAM

On the sPAPR target, another Open-Firmware-compatible BIOS is used, called SLOF.

ARM

ARM32

QEMU emulates the ARMv7 instruction set (and down to ARMv5TEJ) with the NEON extension. It emulates full systems like the Integrator/CP board, Versatile baseboard, RealView Emulation baseboard, XScale-based PDAs, the Palm Tungsten E PDA, and the Nokia N800 and Nokia N810 Internet tablets. QEMU also powers the Android emulator, which is part of the Android SDK (most current Android implementations are ARM-based). Starting from version 2.0.0 of their Bada SDK, Samsung chose QEMU to help development on emulated 'Wave' devices.

In 1.5.0 and 1.6.0, the Samsung Exynos 4210 (dual-core Cortex-A9) and the Versatile Express ARM Cortex-A9 and ARM Cortex-A15 are emulated. In 1.6.0, the 32-bit instructions of the ARMv8 (AArch64) architecture are emulated, but 64-bit instructions are unsupported.
The Xilinx Cortex-A9-based Zynq SoC is modeled, with the following elements:
- Zynq-7000 ARM Cortex-A9 CPU
- Zynq-7000 ARM Cortex-A9 MPCore
- Triple Timer Counter
- DDR Memory Controller
- DMA Controller (PL330)
- Static Memory Controller (NAND/NOR Flash)
- SD/SDIO Peripheral Controller (SDHCI)
- Zynq Gigabit Ethernet Controller
- USB Controller (EHCI - host support only)
- Zynq UART Controller
- SPI and QSPI Controllers
- I2C Controller

ARM64

SPARC

QEMU has support for both 32- and 64-bit SPARC architectures. When the firmware in the JavaStation (sun4m architecture) became version 0.8.1, Proll, a PROM replacement used in version 0.8.2, was replaced with OpenBIOS.

SPARC32

QEMU emulates the following sun4m/sun4c/sun4d peripherals:
- IOMMU or IO-UNITs
- TCX frame buffer (graphics card)
- Lance (Am7990) Ethernet
- Non-volatile RAM M48T02/M48T08
- Slave I/O: timers, interrupt controllers, Zilog serial ports, keyboard and power/reset logic
- ESP SCSI controller with hard disk and CD-ROM support
- Floppy drive (not on SS-600MP)
- CS4231 sound device (only on SS-5, not working yet)

SPARC64

Emulating Sun4u (UltraSPARC PC-like machine), Sun4v (T1 PC-like machine), or generic Niagara (T1) machine with the following peripherals:
- UltraSparc IIi APB PCI bridge
- PCI VGA-compatible card with VESA Bochs Extensions
- PS/2 mouse and keyboard
- Non-volatile RAM M48T59
- PC-compatible serial ports
- 2 PCI IDE interfaces with hard disk and CD-ROM support
- Floppy disk

MicroBlaze

Supported peripherals:
- MicroBlaze with/without MMU, including AXI Timer and Interrupt controller peripherals
- AXI External Memory Controller
- AXI DMA Controller
- Xilinx AXI Ethernet
- AXI Ethernet Lite
- AXI UART 16650 and UARTLite
- AXI SPI Controller

LatticeMico32

Supported peripherals, from the Milkymist SoC:
- UART
- VGA
- Memory card
- Ethernet
- pfu
- timer

CRIS

OpenRISC

Others

External trees exist, supporting the following targets:
- Zilog Z80, emulating a Sinclair 48K ZX Spectrum
- HP PA-RISC
- RISC-V

See also
- qcow
- Comparison of platform virtualization software
- Mtools
- OVPsim
- Q
- SIMH
- SPIM
- GXemul
- GNOME Boxes

External links
- Systems emulation with QEMU, an IBM developerWorks article by M. Tim Jones
- QVM86 project page
- Debian on an emulated ARM machine
- Fedora ARM port emulation with QEMU
- The Wikibook "QEMU and KVM" (in German, or computer-translated to English)
- QEMU on Windows
- QEMU Binaries for Windows
- Microblaze emulation with QEMU
- QEMU speed comparison
- UnifiedSessionsManager - An unofficial QEMU/KVM configuration file definition
- Couverture, a code coverage project based on QEMU
- QOM documentation pages
- https://github.com/qemu/qemu
https://en.wikipedia.org/wiki/Tirunelveli
Tirunelveli
Tirunelveli, also known as Nellai and historically (during British rule) as Tinnevelly, is a major city in the Indian state of Tamil Nadu. It is the administrative headquarters of Tirunelveli District. It is the sixth-largest municipal corporation in the state after Chennai, Coimbatore, Madurai, Tiruchirappalli and Salem. Tirunelveli is located southwest of the state capital Chennai, close to Thoothukudi and Kanyakumari. The downtown is located on the west bank of the Thamirabarani River; its twin, Palayamkottai, is on the east bank. Palayamkottai is called the Oxford of South India, as it is a hub of many schools and colleges; it also houses many important government offices.

Tirunelveli is an ancient city, more than 2,000 years old, and is believed to have been a settlement of great importance. It has been ruled at different times by the Early Pandyas, the Cheras, the Medieval and Later Cholas, the Later Pandyas, the Vijayanagar Empire and the British. The Polygar War, involving Palaiyakkarars led by Veerapandiya Kattabomman and forces of the British East India Company, was waged on the city's outskirts from 1797 to 1801.

Tirunelveli is administered by a municipal corporation, established on 1 June 1994 by the Municipal Corporation Act. The city had a population of 473,637 in 2011, excluding areas later added to the municipal corporation; after the expansion, the total population is 968,984. Tirunelveli is well connected by road and rail with the rest of Tamil Nadu and India. The nearest domestic airport is Thoothukudi Airport, the nearest international airports are Madurai International Airport and Thiruvananthapuram International Airport, and the nearest seaport is Thoothukudi Port.

Industries in Tirunelveli include administrative services, agricultural trading, tourism, banking, agricultural machinery, information technology and educational services. The city is an educational hub of southern India, with institutions such as Anna University Regional Campus - Tirunelveli, Tirunelveli Medical College, the Tirunelveli Veterinary College and Research Institution, Tirunelveli Law College, the Government College of Engineering, and Manonmaniam Sundaranar University, among others. Tirunelveli has a number of historical monuments, the Swami Nellaiappar Temple being the most prominent. The city is famous for a sweet called Iruttu Kadai halwa.

Etymology

Tirunelveli is one of the many temple towns in the state named after the groves, clusters or forests dominated by a particular variety of tree or shrub, with the same variety of tree or shrub sheltering the presiding deity. The region is believed to have been covered with Venu forest and hence called Venuvanam. Tirunelveli was known in Sambandar's seventh-century Saiva canonical work Tevaram as Thirunelveli. Swami Nellaiappar temple inscriptions say that Shiva (as Vrihivriteswara) descended in the form of a hedge and roof to save the paddy crop of a devotee. In Hindu legend, the place was known as Venuvana ("forest of bamboo") due to the presence of bamboo in the temple under which the deity is believed to have appeared. The early Pandyas named the city Thenpandya Nadu or Thenpandya Seemai, the Cholas Mudikonda Cholamandalam, and the Nayaks Tirunelveli Seemai; it was known as Tinnevelly by the British, and Tirunelveli after independence. The word Tirunelveli is derived from three Tamil words: thiru, nel and veli, meaning "sacred paddy hedge".
History
Tirunelveli was under the rule of Pandya kings as their secondary capital; Madurai was the empire's primary capital. The Pandya dynasty in the region dates to several centuries before the Christian era, from inscriptions by Ashoka (304–232 BCE) and mentions in the Mahavamsa, the Brihat-Samhita and the writings of Megasthenes (350–290 BCE). The province came under the rule of the Cholas under Rajendra Chola I in 1064 CE; however, it is unclear whether he conquered the region or obtained it voluntarily. Tirunelveli remained under the control of the Cholas until the early 13th century, when the second Pandyan empire was established with Madurai as its capital. The Nellaiappar temple was the royal shrine of the later Pandyas during the 13th and 14th centuries, and the city benefited from dams constructed with royal patronage during the period.
After the death of Kulasekara Pandian (1268–1308), the region was occupied by Vijayanagara rulers and Marava chieftains (palayakarars, or poligars) during the 16th century. The Maravars occupied the western foothills, and the Telugus and Kannadigas settled in the black-soil-rich eastern portion. Tirunelveli was the subsidiary capital of the Madurai Nayaks; under Viswanatha Nayak (1529–64), the city was rebuilt about 1560. Inscriptions from the Nellaiappar temple indicate generous contributions to the temple. Nayak rule ended in 1736. The region was captured by subjects of the Mughal Empire such as Chanda Sahib (1740–1754), who declared himself "Nawab of Tinnevelly" as well as the Nawab of the Carnatic. In 1743 Nizam-ul-Mulk, viceroy of the Deccan, displaced most of the Marathas from the region, and Tirunelveli came under the rule of the Nawabs of Arcot. Real power lay in the hands of the polygars, who were originally military chiefs of the Nayaks. The city was known as Nellai Cheemai, with cheemai meaning "a developed foreign town". The polygars built forts in the hills, had 30,000 troops and waged war among themselves.
In 1755, the British government sent a mission under Major Heron and Mahfuz Khan which restored some order and bestowed the city on Mahfuz Khan. The poligars waged war against Mahfuz Khan seven miles from Tirunelveli, but were defeated. The failure of Mahfuz Khan led the East India Company to send Muhammed Yusuf for help. Khan became ruler, rebelled in 1763 and was hanged in 1764. In 1758, British troops under Colonel Fullarton reduced the polygar stronghold under Veerapandiya Kattabomman. In 1797, the first Polygar War broke out between the British (under Major Bannerman) and the polygars (headed by Kattabomman). Some polygars (such as the head of Ettaiyapuram) aided the British; Kattabomman was defeated and hanged in his home province of Panchalankurichi. Two years later, another rebellion became known as the Second Polygar War. Panchalankurichi fell to the British after stiff resistance, and the Carnatic region came under British rule thereafter. Tirunelveli District was formed on September 1, 1790 (commemorated as Tirunelveli Day) by the British East India Company, which named it Tinnevelly district. The history of Tirunelveli was researched by Robert Caldwell (1814–91), a Christian missionary who visited the area. After acquiring Tirunelveli from the Nawab of Arcot in 1801, the British anglicised its name to "Tinnevelly" and made it the headquarters of Tinnevelly District.
The administrative and military headquarters was located in Palayamkottai (anglicised as "Palamcottah"), from which attacks against the polygars were launched. After independence both cities reverted to their original names, and Tirunelveli remained the capital of Tirunelveli district. In the early 1900s, parts of Tirunelveli district were split off to form Ramanathapuram and Virudhunagar districts. In 1986, Tirunelveli district was further split for administrative purposes into two districts: Chidambaranar (present-day Thoothukudi district) and Nellai-Kattabomman (later Tirunelveli-Kattabomman, present-day Tirunelveli district). In 2019, Tenkasi was split from Tirunelveli District to form Tenkasi District.
Geography
Tirunelveli is located at , and its average elevation is . It is located at the southernmost tip of the Deccan plateau. The Tamirabarani River divides the city into the Tirunelveli quarter and the Palayamkottai area. The river (with its tributaries, such as the Chittar) is the major source of irrigation, and is fed by the northeast and southwest monsoons. There are several small lakes and ponds (known as kulam) in the city, including Nainar Kulam, Veinthan Kulam, Elantha Kulam and Udayarpetti Kulam. The area around the Tamirabarani River and the Chittar has five streams: Kodagan, Palayan, Tirunelveli, Marudur East and Marudur West, and the Chittar feeds fifteen other channels. The soil is friable, red and sandy.
Climate
Tirunelveli has a hot semi-arid climate (Köppen: BSh) bordering on the relatively rare dry-summer tropical savanna climate (Köppen: As), scattered irregularly across the world but relatively common in areas near the Laccadive Sea. The climate of Tirunelveli is generally hot and humid. The average temperature during summer (March to June) ranges from to , and to during the rest of the year. The average annual rainfall is . Maximum precipitation occurs during the northeast monsoon (October–December). Since the economy of the district is primarily based on agriculture, flooding of the Tamirabarani River or a fluctuation in monsoon rain has an immediate impact on the local economy. The primary crops grown in the region are paddy and cotton. Pineapples were introduced during the 16th century, chilli and tobacco during the late 16th century, and potatoes during the early 17th century. The most common tree is the palmyra palm, a raw material in cottage industries. Other trees grown in the region are teak, wild jack, manjakadambu, venteak, vengai, pillaimaruthu, karimaruthu and bamboo. Livestock of the city and district comprises cattle, buffalo, goats, sheep and other animals in smaller numbers.
Demographics
According to the 2011 census, Tirunelveli had a population of 473,637 with a sex ratio of 1,027 females for every 1,000 males, well above the national average of 929. A total of 46,624 were under the age of six, constituting 23,894 males and 22,730 females. Scheduled Castes and Scheduled Tribes accounted for 13.17% and 0.32% of the population respectively. The average literacy of the city was 81.49%, compared to the national average of 72.99%. The city had a total of 120,466 households. There were a total of 182,471 workers, comprising 2,088 cultivators, 5,515 main agricultural labourers, 18,914 in household industries, 142,435 other workers, 13,519 marginal workers, 166 marginal cultivators, 913 marginal agricultural labourers, 1,828 marginal workers in household industries and 10,612 other marginal workers.
According to provisional data from the 2011 census, the Tirunelveli urban agglomeration had a population of 498,984, with 246,710 males and 252,274 females. The overall sex ratio in the city was 1,023, and the child sex ratio was 957. Tirunelveli had a literacy rate of 91 percent, with male literacy at 95 percent and female literacy at 87 percent. A total of 42,756 of the city's population were under age six. As per the religious census of 2011, Tirunelveli had 69.0% Hindus, 20.02% Muslims, 10.59% Christians, 0.01% Sikhs, 0.01% Buddhists, 0.02% Jains and 0.35% following other religions. The city covers an area of . The population density of the city in the 2001 census was 3,781 persons per square kilometre, compared with 2,218 persons per square kilometre in 1971. Hindus form the majority of the urban population, followed by Muslims and Christians. Tamil is the main language spoken in the city, but the use of English is relatively common; English is the medium of instruction in most educational institutions and offices in the service sector. The Tamil dialect spoken in this region is distinctive and readily recognised throughout Tamil Nadu.
Economy
Inscriptions from the eighth to the 14th centuries (during the rule of the Pandyas, Cholas and later Tenkasi Pandyas) indicate the growth of Tirunelveli as an economic centre which developed around the Nellaiappar temple. The drier parts of the province also flourished during the rule of the Vijayanagara kings. From 1550 until the early modern era, migration to the city from other parts of the state was common, and the urban regions became hubs of manufacturing and commerce. Tirunelveli was a strategic point connecting the eastern and western parts of the peninsula, as well as a trading centre. Records of sea and overland trade between 1700 and 1850 indicate close trading connections with Sri Lanka and Kerala. During the 1840s, cotton produced in the region was in demand for British mills. The chief exports during British rule were cotton, jaggery, chillies, tobacco, palmyra fibre, salt, dried saltwater fish and cattle.
Occupations in Tirunelveli include service-sector activities such as administration, agricultural trading, tourism, banking, agro-machinery, cement manufacturing, information technology and educational services. In 1991, the Tirunelveli region ranked second in Tamil Nadu in the number of women workers. Service sectors such as tourism have developed, owing to a growth in religious tourism. Tirunelveli has beedi and cement factories, tobacco companies, workshops for steel-based products and mills for cotton textiles, spinning and weaving; there are also small-scale industries, such as tanneries and brick kilns. The agricultural areas, hand-woven clothes and household industries contribute to the economic growth of the city. Food-processing industries have developed since the late 1990s; at the district level, food processing is the foremost industrial segment. Industries involving rice milling, blue-jelly metal manufacturing and power generation are located on the outskirts of the city. The major agricultural products of the region are paddy and cotton. Beedi production during the 1990s earned an annual revenue of ₹190 billion and foreign exchange of ₹8 billion across the three districts of Tirunelveli, Tiruchirapalli and Vellore. Tirunelveli is a major area for wind-power generation. Most wind-power-generation units in Tamil Nadu are located in Tirunelveli and Kanyakumari Districts; in 2005 they contributed 2,036.9 MW to the state power-generation capacity.
Many private, multinational wind companies are located on the outskirts of the city. In June 2007 the Tata Group signed a memorandum of understanding with the state government to open a titanium dioxide plant, with an estimated value of ₹25 billion, in Tirunelveli District and Thoothukudi District. However, the state government put the project on hold after increasing protests against it. Tirunelveli has two Special Economic Zones on the outskirts of the city, one at Gangaikondan in the north along NH 44 and another at Nanguneri in the south along NH 44. They house several multinational companies, such as Atos Syntel, Coca-Cola, Yokohama Rubber Company, Indian Oil Corporation Limited, Novo Carbon Private Limited, Bosch Limited, Alliance Tyre Group, Ramco Cements and India Cements.
Administration and politics
The Tirunelveli Municipality was established in 1866 during British rule. It became a City Municipal Corporation in 1994, bringing the Palayamkottai and Melapalayam municipalities, the Thatchanallur town panchayat and eleven other village panchayats within the city limits. The municipal corporation has five zones: Tirunelveli, Thatchanallur, Palayamkottai, Pettai and Melapalayam. The corporation has 55 wards, with an elected councillor for each ward. The corporation has six departments: general administration and personnel, engineering, revenue, public health, city planning and information technology (IT). All departments are under the control of a municipal commissioner. Legislative power is vested in a body of 55 members, one from each ward. The legislative body is headed by an elected chairperson, assisted by a deputy.
Tirunelveli city is the district headquarters for the Tirunelveli district. The city is part of the Tirunelveli assembly constituency, electing a member to the Tamil Nadu Legislative Assembly every five years. Since the 1977 elections, the assembly seat has been held by the Dravida Munnetra Kazhagam (DMK) for three terms (following the 1989, 1996 and 2006 elections) and the All India Anna Dravida Munnetra Kazhagam (AIADMK) for six terms (following the 1977, 1980, 1984, 1991, 2001 and 2011 elections). The current MLA is Nainar Nagendran, an ex-minister and the BJP's legislative party leader.
Tirunelveli is part of the Tirunelveli Lok Sabha constituency, which contains six assembly constituencies: Tirunelveli, Nanguneri, Ambasamudram, Alangulam, Radhapuram and Palayamkottai. The current Member of Parliament from the constituency is S. Gnanathiraviam of the DMK. Since 1957, the Tirunelveli parliament seat has been held by the Indian National Congress for four terms (1957–61, 1962–67, 2004–09 and 2009–14). The Swatantra Party and the CPI won once each, in 1967–71 and 1971–77 respectively. The DMK won the seat twice (1980–84 and 1996–98), and the AIADMK won it seven times (1977–80, 1984–89, 1989–91, 1991–96, 1998, 1999–2004 and from the 2014 elections).
Law and order in the city are maintained by the Tirunelveli City division of the Tamil Nadu Police, headed by a commissioner. There are units for prohibition enforcement, district crime, social justice and human rights, district crime records and a special branch operating at the district level, each headed by a deputy superintendent of police.
Transport
Tirunelveli has an extensive transport network and is well connected to other major cities by road, rail and air. The corporation maintains a total of of roads. The city has of concrete roads, of BT roads, of water-bound macadam roads, of unpaved roads and of highways.
Twenty-two kilometres (fourteen miles) of highway are maintained by the State Highways Department and thirty kilometres (nineteen miles) by the National Highways Department. In 1844, a bridge was built by Colonel Horsley across the Tamirabarani River, connecting Tirunelveli to Palayamkottai. The city is located on NH 44, south of Madurai and north of Kanyakumari. NH 138 connects Palayamkottai with Tuticorin Port. Tirunelveli is also connected by major highways to Kollam, Tiruchendur, Rajapalayam, Sankarankovil, Ambasamudram and Nazareth. The main bus stand (popularly known as the New Bus Stand), opened in 2003, is located in Veinthaankulam, and there is regular bus service to and from the city. The main bus stand has been redeveloped under the Smart City Projects at a total cost of Rs. 50.72 crore (₹507.2 million). Renamed the Bharat Ratna Dr. MGR Bus Stand, it was inaugurated by Chief Minister M.K. Stalin on 8 December 2021 through video conferencing. The other bus stands (for intracity services) are the Junction and Palay bus stands. The Tamil Nadu State Transport Corporation has daily services to a number of cities and operates a computerised reservation centre in the main bus stand; it also operates local buses serving the city and neighbouring villages. The Periyar bus stand, commonly known as the old bus stand, has been fully demolished, and a new bus station will be built under the smart city plan. The State Express Transport Corporation has intercity services to Bangalore, Chennai, Kanyakumari, Trivandrum and other cities.
Tirunelveli Junction railway station is one of the oldest railway stations in India. The line from Tirunelveli to Sengottai was opened in 1903; the connection to Quilon, which was completed later, was the most important trade route to Travancore province in British India. The city is connected to major cities in all four directions: Madurai and Sankarankovil to the north, Nagercoil and Trivandrum to the south, Sengottai and Kollam to the west and Tiruchendur to the east. Tirunelveli is also connected to major Indian cities, with daily services to Chennai, Coimbatore, Tiruchirappalli, Madurai, Kanyakumari, Salem, Tirupati, Bangalore, Hyderabad, Mangalore, Ernakulam, Trivandrum, Mumbai, Guruvayur, Kolkata, Jabalpur, Varanasi, Delhi, Jammu, Kollam, Palghat and Ahmedabad. There are daily passenger services to Tuticorin, Madurai, Tiruchendur, Tiruchirapalli, Coimbatore, Mayiladuthurai, Nagercoil, Palghat and Kollam. The nearest airport to Tirunelveli is Tuticorin Airport (TCR) at Vaagaikulam in Thoothukkudi District, east of the city, which offers daily flights to Chennai and Bangalore. The nearest international airports are Madurai International Airport, away, and Thiruvananthapuram International Airport (TRV), about away.
Culture
Nellaiappar Temple is a Hindu temple dedicated to Shiva in the form of Nellaiappar. The deity is revered in the verses of Tevaram, a seventh-century Saiva work by Sambandar. The temple was greatly expanded during the 16th-century Nayak period and has a number of architectural attractions, including musical pillars. The temple has several festivals, the foremost being an annual festival in which the temple chariot is drawn through the streets near the temple. It is one of the Pancha Sabhai temples, the five royal courts of Nataraja (the dancing form of Shiva), where he performed a cosmic dance. The Nataraja shrine in the temple represents copper, and features many copper sculptures.
Tirunelveli has its fair share of temples, dating back to ancient times, and prides itself on being the site of the Nellaiappar Temple. Tirunelveli is also known for halwa, a sweet made of wheat, sugar and ghee. It originated during the mid-1800s at Lakshmi Vilas Stores, which still exists. The art of sweet-making spread to other parts of Tamil Nadu, such as Nagercoil, Srivilliputhur and Thoothukudi. Tirunelveli halwa was popularised by Iruttukadai Halwa, a shop opened in 1900 which sells the sweet only during twilight.
Tirunelveli has a number of cinemas which predominantly play Tamil movies. It is among the 40 cities in India with FM radio stations. Tirunelveli's stations are Tirunelveli Vanoli Nilayam (All India Radio, from the Government of India), Suryan FM (operated by Sun Network on 93.5 MHz) and Hello FM (operated by the Malai Malar Group on 106.4 MHz).
A number of state- and national-level sports events are sponsored in Tirunelveli annually. The VOC grounds (in central Palayamkottai) and the Anna Stadium (on St. Thomas Road) are popular venues in the city, and some events are held at scholastic sports facilities. As in India generally, the most popular sport is cricket. Also popular are football, volleyball, swimming and hockey, played on facilities provided by the Tirunelveli Division of the Sports Development Authority of Tamil Nadu. The Government Exhibition, an annual event at the Exhibition Grounds, attracts thousands of visitors from in and around Tirunelveli. The District Science Centre - Tirunelveli is in the centre of the city. Near the city are regional tourist attractions such as the Manimuthar and Papanasam Dams, the Ariakulam and Koonthakulam Bird Sanctuaries, Manjolai and Upper Kodaiyar.
Education
During the 1790s, Tamil Christians established a number of schools in Tirunelveli. The missionary educational system included primary and boarding schools, seminaries, industrial schools, orphanages and colleges. The first boarding school for girls was opened in 1821, but its efforts were hampered by the emphasis on Christian education. Thomas Munro (1761–1827) of the British East India Company established a two-tier school system in the Madras Presidency: district schools teaching law, and sub-district schools teaching vernacular languages. Tirunelveli had four sub-district schools: two teaching Tamil and one each for Telugu and Persian.
Tirunelveli city has 80 schools: 29 higher secondary schools, 12 high schools, 22 middle schools and 17 primary schools; the city corporation operates 33 of these schools. The city has eight arts and science colleges and six professional colleges. Manonmaniam Sundaranar University is named for the poet Manonmaniam Sundaranar, who wrote "Tamil Thai Vazhthu", the state anthem. Most Christian schools and colleges in the city are located in the Palayamkottai area. Anna University of Technology Tirunelveli was established in 2007, offering a variety of engineering and technology courses for undergraduate and graduate students. Tirunelveli Medical College, the Veterinary College and Research Institution and the Government College of Engineering, Tirunelveli are professional colleges operated by the government of Tamil Nadu. The Jesuit St. Xavier's College and St. John's College (operated by the Church of South India diocese), MDT Hindu College, Sadakathulla Appa College and Sarah Tucker College are notable arts colleges.
The Indian Institute of Geomagnetism (IIG) operates a regional unit, the Equatorial Geophysical Research Laboratory, conducting research in geomagnetism and atmospheric and space sciences. The city has a District Science Centre (a satellite unit of the Visvesvaraya Industrial and Technological Museum, Bangalore) with permanent exhibitions, science shows, interactive self-guided tours, a mini-planetarium and sky observation. Tirunelveli and the district have a high rate of child labour; the drop in female school attendance between ages 15 and 19 is almost four times greater than that in the rest of Tamil Nadu.
Utilities
Electric service to Tirunelveli is regulated and distributed by the Tamil Nadu Electricity Board (TNEB). The city is headquarters for the Tirunelveli region of the four-division TNEB and, with its suburbs, forms the Tirunelveli Electricity Distribution Circle; a chief distribution engineer is stationed at regional headquarters. Water from the Tamirabarani River is supplied throughout the city by the Tirunelveli City Corporation. About 100 metric tonnes of solid waste are collected from the city daily through door-to-door collection; source segregation and disposal are performed by the sanitary department of the Tirunelveli Municipal Corporation. The underground drainage system was constructed in 1998, covering 22 percent of the corporation area; sewage in the remaining areas is disposed of through septic tanks and public conveniences. The corporation maintains a total of of stormwater drains, 27 percent of the total road length. The clinics operated by the corporation provide primary health care to the urban poor through family-welfare and immunisation programs. In addition, there are private hospitals and clinics providing health care to citizens.
Tirunelveli is part of the Tirunelveli Telecom District of Bharat Sanchar Nigam Limited (BSNL), India's state-owned telecom and internet-services provider. Both Global System for Mobile Communications (GSM) and Code Division Multiple Access (CDMA) mobile services are available. In addition to telecommunications, BSNL also provides broadband internet service. Tirunelveli is one of a few cities in India where BSNL's Caller Line Identification (CLI)-based internet service, Netone, is available. The city has a Passport Seva Kendra, a public-private-sector collaboration, which accepts passport applications from the Tirunelveli region for the passport office in Madurai.
Notes
References
External links
Tirunelveli City-Municipal Corporation
Cities and towns in Tirunelveli district
13070288
https://en.wikipedia.org/wiki/.pkg
.pkg
.pkg (package) is a filename extension used for several file formats that contain packages of software and other files to be installed onto a certain device, operating system, or filesystem, such as macOS, iOS, PlayStation Vita, PlayStation 3, and PlayStation 4. Uses include the following (a brief command-line sketch of the Solaris and macOS tools follows the references below):
The macOS and iOS operating systems made by Apple use the .pkg extension for Apple software packages, which use the Xar format internally.
PlayStation Vita, Sony PlayStation 3 and Sony PlayStation 4 use .pkg files for installation of PlayStation Vita, PlayStation 3 and PlayStation 4 software, applications, homebrew, and DLC from the PlayStation Store.
The Solaris (SunOS) operating system and illumos use .pkg to denote software packages that can be installed, removed and tracked using the pkgadd, pkgrm, and pkginfo commands. Solaris is a derivative of the AT&T UNIX OS, and the .pkg extension was also used on the AT&T UNIX System V OS. AT&T UNIX System V .pkg files are cpio archives that contain specific file tree structures.
Symbian uses .pkg files to store configuration information used to generate .sis installer packages.
BeOS used .pkg files in the 1990s as part of its software package platform. Be Inc. bought Starcode Software Inc. and acquired their packaging tools.
The Apple Newton operating system used files ending in .pkg for Newton applications and software. As a result, when seen from the Mac OS X Finder, Newton applications appear the same as Mac OS X Installer packages; however, they do not share the same file format.
The PTC/CoCreate 3D Modeling application uses .pkg files to store model files; this .pkg file uses the zip file format.
Microsoft is said to use .pkg files for profile storage on Xbox Live.
L3 Avionics systems use some .pkg files for software updates.
See also
List of software package management systems
References
PKG
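As referenced above, the Solaris/illumos and macOS package tools can be driven from the command line. The sketch below is illustrative only: the pkgadd, pkgrm and pkginfo commands (SVR4) and the macOS installer and pkgutil commands are standard, while the package names (SUNWexample, Example.pkg) and paths are hypothetical placeholders.

  # Solaris / illumos (SVR4 packaging)
  pkgadd -d /tmp/SUNWexample.pkg    # install a package from a datastream file
  pkginfo -l SUNWexample            # show detailed status of an installed package
  pkgrm SUNWexample                 # remove the installed package

  # macOS (Apple installer packages, xar-based)
  installer -pkg Example.pkg -target /     # install onto the boot volume (run as root)
  pkgutil --expand Example.pkg /tmp/out    # unpack the xar archive for inspection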
16920259
https://en.wikipedia.org/wiki/Free%20content
Free content
Free content, libre content, or free information is any kind of functional work, work of art, or other creative content that meets the definition of a free cultural work.
Definition
A free cultural work is, according to the definition of Free Cultural Works, one that has no significant legal restriction on people's freedom to:
use the content and benefit from using it,
study the content and apply what is learned,
make and distribute copies of the content,
change and improve the content and distribute these derivative works.
Free content encompasses all works in the public domain and also those copyrighted works whose licenses honor and uphold the freedoms mentioned above. Because the Berne Convention in most countries by default grants copyright holders monopolistic control over their creations, copyrighted content must be explicitly declared free, usually by the referencing or inclusion of licensing statements from within the work (a sketch of such a statement appears at the end of the Academia section below). Although there are many different definitions in regular everyday use, free content is legally very similar, if not identical, to open content. An analogy is the use of the rival terms free software and open-source, which describe ideological differences rather than legal ones. For instance, the Open Knowledge Foundation's Open Definition describes "open" as synonymous with the definition of free in the "Definition of Free Cultural Works" (as also in the Open Source Definition and Free Software Definition). For such free/open content both movements recommend the same three Creative Commons licenses: CC BY, CC BY-SA, and CC0.
Legal matters
Copyright
Copyright is a legal concept which gives the author or creator of a work legal control over the duplication and public performance of their work. In many jurisdictions, this is limited by a time period after which the works enter the public domain. Copyright laws are a balance between the rights of creators of intellectual and artistic works and the rights of others to build upon those works. During the period of copyright, the author's work may only be copied, modified, or publicly performed with the consent of the author, unless the use is a fair use. Traditional copyright control limits the use of the author's work to those who either pay royalties for usage of the content or limit their use to fair use. Secondly, it limits the use of content whose author cannot be found. Finally, it creates a perceived barrier between authors by limiting derivative works, such as mashups and collaborative content.
Public domain
The public domain is a range of creative works whose copyright has expired or was never established, as well as ideas and facts which are ineligible for copyright. A public domain work is a work whose author has either relinquished it to the public or can no longer claim control over the distribution and usage of the work. As such, any person may manipulate, distribute, or otherwise use the work without legal ramifications. A work in the public domain or released under a permissive license may be referred to as "copycenter".
Copyleft
Copyleft is a play on the word copyright and describes the practice of using copyright law to remove restrictions on distributing copies and modified versions of a work. The aim of copyleft is to use the legal framework of copyright to enable non-author parties to reuse and, in many licensing schemes, modify content that is created by an author.
Unlike works in the public domain, the author still maintains copyright over the material; however, the author grants a non-exclusive license to any person to distribute, and often modify, the work. Copyleft licenses require that any derivative works be distributed under the same terms and that the original copyright notices be maintained. A symbol commonly associated with copyleft is a reversal of the copyright symbol, facing the other way: the opening of the C points left rather than right. Unlike the copyright symbol, the copyleft symbol does not have a codified meaning.
Usage
Projects that provide free content exist in several areas of interest, such as software, academic literature, general literature, music, images, video, and engineering. Technology has reduced the cost of publication and lowered the entry barrier sufficiently to allow for the production of widely disseminated materials by individuals or small groups. Projects to provide free literature and multimedia content have become increasingly prominent owing to the ease of dissemination of materials associated with the development of computer technology; such dissemination may have been too costly prior to these technological developments.
Media
In media, which includes textual, audio, and visual content, free licensing schemes such as some of the licenses made by Creative Commons have allowed for the dissemination of works under a clear set of legal permissions. Not all Creative Commons licenses are entirely free; their permissions may range from very liberal general redistribution and modification of the work to more restrictive redistribution-only licensing. Since February 2008, Creative Commons licenses which are entirely free carry a badge indicating that they are "approved for free cultural works". Repositories exist which exclusively feature free material and provide content such as photographs, clip art, music, and literature. While extensive reuse of free content from one website on another website is legal, it is usually not sensible because of the duplicate content problem. Wikipedia is amongst the most well-known databases of user-uploaded free content on the web. While the vast majority of content on Wikipedia is free content, some copyrighted material is hosted under fair-use criteria.
Software
Free and open-source software, often also referred to as open-source software and free software, is a maturing technology, with major companies using free software to provide both services and technology to end-users and technical consumers. The ease of dissemination has allowed for increased modularity, which allows smaller groups to contribute to projects and simplifies collaboration. Open-source development models have been classified as having peer-recognition and collaborative-benefit incentives similar to those typified by more classical fields such as scientific research, with the social structures that result from this incentive model decreasing production cost. Given sufficient interest in a software component, peer-to-peer distribution methods can reduce the distribution costs of software, removing the burden of infrastructure maintenance from developers. As distribution resources are simultaneously provided by consumers, these software distribution models are scalable; that is, the method is feasible regardless of the number of consumers. In some cases, free software vendors may use peer-to-peer technology as a method of dissemination.
In general, project hosting and code distribution are not a problem for most free projects, as a number of providers offer these services free of charge.
Engineering and technology
Free content principles have been translated into fields such as engineering, where designs and engineering knowledge can be readily shared and duplicated in order to reduce the overheads associated with project development. Open design principles can be applied in engineering and technological applications, with projects in mobile telephony, small-scale manufacture, the automotive industry, and even agricultural areas. Technologies such as distributed manufacturing can allow computer-aided manufacturing and computer-aided design techniques to be used in the small-scale production of components for the development of new devices, or the repair of existing ones. Rapid fabrication technologies underpin these developments, which allow end-users of technology to construct devices from pre-existing blueprints, using software and manufacturing hardware to convert information into physical objects.
Academia
In academic work, the majority of works are not free, although the percentage of works that are open access is growing rapidly. Open access refers to online research outputs that are free of all restrictions on access (e.g. access tolls) and free of many restrictions on use (e.g. certain copyright and license restrictions). Authors may see open access publishing as a method of expanding the audience that is able to access their work to allow for greater impact of the publication, or may support it for ideological reasons. Open access publishers such as PLOS and BioMed Central provide capacity for review and publishing of free works, though such publications are currently more common in science than the humanities. Various funding institutions and governing research bodies have mandated that academics must produce their works to be open access in order to qualify for funding, such as the US National Institutes of Health, Research Councils UK (effective 2016) and the European Union (effective 2020). At an institutional level some universities, such as the Massachusetts Institute of Technology, have adopted open access publishing by default by introducing their own mandates. Some mandates may permit delayed publication and may charge researchers for open access publishing. Open content publication has been seen as a method of reducing costs associated with information retrieval in research, as universities typically pay to subscribe for access to content that is published through traditional means, whilst improving journal quality by discouraging the submission of research articles of reduced quality. Subscriptions for non-free content journals may be expensive for universities to purchase, though the articles are written and peer-reviewed by academics themselves at no cost to the publisher. This has led to disputes between publishers and some universities over subscription costs, such as the one which occurred between the University of California and the Nature Publishing Group. For teaching purposes, some universities, including the Massachusetts Institute of Technology, provide freely available course content, such as lecture notes, video resources and tutorials. This content is distributed via Internet resources to the general public. Publication of such resources may be by a formal institution-wide program, or alternately via informal content provided by individual academics or departments.
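As referenced in the Definition section above, a work is usually declared free by embedding a licensing statement naming the license. A minimal sketch using the standard Creative Commons notice wording (the work title and author below are placeholders):

  "Example Work" by A. Author is licensed under the Creative Commons
  Attribution-ShareAlike 4.0 International License. To view a copy of
  this license, visit https://creativecommons.org/licenses/by-sa/4.0/

Such a notice, placed in a work's front matter or footer, is what allows reusers to identify the work as free content without contacting the author.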
Legislation
Every country has its own law and legal system, sustained by its legislation: a set of law-documents, that is, documents containing statutory obligation rules, usually laws created by legislatures. In a democratic country, each law-document is published as open media content and is in principle free content; but in general, there are no explicit licenses attributed to each law-document, so the license must be interpreted as an implied license. Only a few countries have explicit licenses in their law-documents, such as the UK's Open Government Licence (a compatible license). In the other countries, the implied license comes from the country's own rules (general laws and rules about copyright in government works). The automatic protection provided by the Berne Convention does not apply to law-documents: Article 2.4 excludes official texts from the automatic protection. It is also possible to "inherit" the license from context. The set of a country's law-documents is made available through national repositories; examples of law-document open repositories are LexML Brazil, Legislation.gov.uk and N-Lex. In general, a law-document is offered in more than one (open) official version, but the main one is that published by a government gazette. So, law-documents can eventually inherit the license expressed by the repository or by the gazette that contains them.
Open content
Open content describes any work that others can copy or modify freely by crediting the original creator, but without needing to ask for permission. This has been applied to a range of formats, including textbooks, academic journals, films and music. The term was an expansion of the related concept of open-source software. Such content is said to be under an open license.
History
The concept of applying free software licenses to content was introduced by Michael Stutz, who in 1997 wrote the paper "Applying Copyleft to Non-Software Information" for the GNU Project. The term "open content" was coined by David A. Wiley in 1998 and evangelized via the Open Content Project, describing works licensed under the Open Content License (a non-free share-alike license, see 'Free content' below) and other works licensed under similar terms. It has since come to describe a broader class of content without conventional copyright restrictions. The openness of content can be assessed under the '5Rs Framework' based on the extent to which it can be retained, reused, revised, remixed and redistributed by members of the public without violating copyright law. Unlike free content and content under open-source licenses, there is no clear threshold that a work must reach to qualify as 'open content'. Although open content has been described as a counterbalance to copyright, open content licenses rely on a copyright holder's power to license their work, as does copyleft, which also utilizes copyright for such a purpose. In 2003 Wiley announced that the Open Content Project had been succeeded by Creative Commons and their licenses, where he joined as "Director of Educational Licenses". In 2005, the Open Icecat project was launched, in which product information for e-commerce applications was created and published under the Open Content License. It was embraced by the tech sector, which was already quite open-source minded. In 2006 the Creative Commons' successor project was the Definition of Free Cultural Works for free content, put forth by Erik Möller, Richard Stallman, Lawrence Lessig, Benjamin Mako Hill, Angela Beesley, and others.
The Definition of Free Cultural Works is used by the Wikimedia Foundation. In 2008, the Attribution and Attribution-ShareAlike Creative Commons licenses were marked as "Approved for Free Cultural Works" among other licenses. Another successor project is the Open Knowledge Foundation, founded by Rufus Pollock in Cambridge in 2004 as a global non-profit network to promote and share open content and data. In 2007 the foundation gave an Open Knowledge Definition for "content such as music, films, books; data be it scientific, historical, geographic or otherwise; government and other administrative information". In October 2014, with version 2.0, Open Works and Open Licenses were defined, and "open" is described as synonymous with the definitions of open/free in the Open Source Definition, the Free Software Definition and the Definition of Free Cultural Works. A distinct difference is the focus given to the public domain and the attention paid to accessibility (open access) and readability (open formats). Among several conformant licenses, six are recommended: three of its own (the Open Data Commons Public Domain Dedication and Licence, the Open Data Commons Attribution License and the Open Data Commons Open Database License) and the CC0, CC BY, and CC BY-SA Creative Commons licenses.
"Open content" definition
The website of the Open Content Project once defined open content as 'freely available for modification, use and redistribution under a license similar to those used by the open-source / free software community'. However, such a definition would exclude the Open Content License, because that license forbids charging for content, a right required by free and open-source software licenses. The term has since shifted in meaning. Open content is "licensed in a manner that provides users with free and perpetual permission to engage in the 5R activities." The 5Rs are put forward on the Open Content Project website as a framework for assessing the extent to which content is open:
Retain – the right to make, own, and control copies of the content (e.g., download, duplicate, store, and manage)
Reuse – the right to use the content in a wide range of ways (e.g., in a class, in a study group, on a website, in a video)
Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate the content into another language)
Remix – the right to combine the original or revised content with other open content to create something new (e.g., incorporate the content into a mashup)
Redistribute – the right to share copies of the original content, your revisions, or your remixes with others (e.g., give a copy of the content to a friend)
This broader definition distinguishes open content from open-source software, since the latter must be available for commercial use by the public. However, it is similar to several definitions for open educational resources, which include resources under noncommercial and verbatim licenses. The later Open Definition by the Open Knowledge Foundation defines open knowledge with open content and open data as sub-elements and draws heavily on the Open Source Definition; it preserves the limited sense of open content as free content, unifying both.
Open access
"Open access" refers to toll-free or gratis access to content, mainly originally peer-reviewed scholarly journal articles. Some open access works are also licensed for reuse and redistribution (libre open access), which would qualify them as open content.
Open content and education
Over the past decade, open content has been used to develop alternative routes towards higher education. Traditional universities are expensive, and their tuition rates are increasing. Open content allows a free way of obtaining higher education that is "focused on collective knowledge and the sharing and reuse of learning and scholarly content." There are multiple projects and organizations that promote learning through open content, including OpenCourseWare, Khan Academy and the Saylor Academy. Some universities, like MIT, Yale, and Tufts, are making their courses freely available on the internet.
Textbooks
The textbook industry is one of the educational industries in which open content can make the biggest impact. Traditional textbooks, aside from being expensive, can also be inconvenient and out of date because of publishers' tendency to constantly print new editions. Open textbooks help to eliminate this problem, because they are online and thus easily updatable. Being openly licensed and online can be helpful to teachers, because it allows the textbook to be modified according to the teacher's unique curriculum. There are multiple organizations promoting the creation of openly licensed textbooks, including the University of Minnesota's Open Textbook Library, Connexions, OpenStax College, the Saylor Academy, the Open Textbook Challenge and Wikibooks.
Licenses
According to the current definition of open content on the OpenContent website, any general, royalty-free copyright license would qualify as an open license because it 'provides users with the right to make more kinds of uses than those normally permitted under the law. These permissions are granted to users free of charge.' However, the narrower definition used in the Open Definition effectively limits open content to libre content; any free content license, as defined by the Definition of Free Cultural Works, would qualify as an open content license. By this narrower criterion, the following still-maintained licenses qualify:
Creative Commons licenses (only Creative Commons Attribution, Attribution-ShareAlike and Zero)
Open Publication License (the original license of the Open Content Project, the Open Content License, did not permit for-profit copying of the licensed work and therefore does not qualify)
Against DRM license
GNU Free Documentation License (without invariant sections)
Open Game License (designed for role-playing games by Wizards of the Coast)
Free Art License
See also
Free software movement
Freedom of information
Free education
Open publishing
Open-source hardware
Project Gutenberg
Digital rights
Open catalogue
Open source
Further reading
OECD – Organisation for Economic Co-operation and Development: Giving Knowledge for Free – The Emergence of Open Educational Resources, 2007.
Notes
References
Digital art Free culture movement Content
20694622
https://en.wikipedia.org/wiki/Computer%20User
Computer User
Computer User is a computer magazine that was founded in 1982 and which, after several owners and fundamental changes, is still in business today online as computeruser.com. It should not be confused with a magazine published in 1983–1984 by McPheeters, Wolfe & Jones that was also titled Computer User, but with the subtitle "For the Tandy/Radio Shack System".
History
In the beginning years of publicly popular computer use, Computer User was founded in Minneapolis, Minnesota as a free monthly magazine published by Computer User, Inc., a Minnesota corporation. Steven Bianucci was publisher, Dale Archibald editor, and Diane Teeters handled advertising sales. Revenues were derived from advertising. Computer User took advantage of a tradition in the Twin Cities metropolitan area of placing free publication newsstands in business districts and stores. The magazine, printed originally in black and white with one spot color on newsprint, proved immediately popular, with distribution eventually hitting many hundreds of sites and circulation around 25,000 in the Twin Cities for what became a full-color piece on newsprint. It remained free to pick up, but could be subscribed to for mail delivery at $34.95 per year. Computer User won numerous awards, including one from 2001; others are listed on the current publisher's website. Computer User became franchised to 18 metropolitan U.S. markets, wherein Computer User provided content and the local publisher provided advertising and some local content. At some point, according to a reference from the University of Minnesota Library, the publisher was Computer User Publications, Inc. Computer User was sold to M.S.P. Publications, a very successful Minneapolis-based magazine publisher. MSP published Computer User until 2004, when paper publication ceased and the enterprise and web-based name rights were sold to ComputerUser, Inc., a New York State-registered company.
Content
Computer User's style was, from the beginning, focused on the user of the then-new microcomputers. Nearly all other publications, national and local, focused more on the technical aspects of having a computer, many of which were homebrew, assembled in part or in whole by the user. Computer User articles might include personal experiences, advice on how to use software, and expert advice columns. In the first decade, before the Internet, a very popular feature was a complete local list of computer bulletin boards. The monthly index of these, which changed frequently because the boards were usually based on someone's home computer, included the BBS name, a telephone number one could dial up with a computer modem, a list of the modem speed(s) offered (not all of today's speeds existed then; the most common speed available to home users was 300 baud, or roughly 300 bits per second), and a summary of the topics or interests of those using the bulletin board. It was, in the 1980s and 1990s, a form of social networking and a very important reason many people read the magazine. The magazine also published an equally popular index of computer user groups and their meeting schedules. Some were generic, but others were focused on a single type of computer such as Tandy or Commodore. The magazine list enabled one to go to a meeting and ask other enthusiasts how to do things in a world where there was no other source. Therefore, the magazine often published articles aimed at the groups and how to use them.
References External links Current Issue Monthly magazines published in the United States Online magazines published in the United States Defunct computer magazines published in the United States Free magazines Magazines established in 1983 Magazines disestablished in 2004 Magazines published in Minnesota Mass media in Minneapolis–Saint Paul Online magazines with defunct print editions Online computer magazines
45112245
https://en.wikipedia.org/wiki/WampServer
WampServer
WampServer refers to a solution stack for the Microsoft Windows operating system, created by Romain Bourdon and consisting of the Apache web server, OpenSSL for SSL support, the MySQL database and the PHP programming language.
Notable lists, variants, and equivalents on other platforms
LAMP: for the Linux operating system (the original AMP stack)
MAMP: for the macOS operating system
SAMP: for the Solaris operating system
WIMP: a similar package in which Apache is replaced by Internet Information Services (IIS)
WISA: a solution stack for Windows, consisting of Internet Information Services, Microsoft SQL Server, and ASP.NET
XAMPP: a cross-platform web server solution stack package
See also
Comparison of web frameworks
List of AMP packages
References
External links
Web server software Website management WAMP
36699343
https://en.wikipedia.org/wiki/OProfile
OProfile
In computing, OProfile is a system-wide statistical profiling tool for Linux. John Levon wrote it in 2001 for Linux kernel version 2.4 after his M.Sc. project; it consists of a kernel module, a user-space daemon and several user-space tools.
Details
OProfile can profile an entire system or its parts, from interrupt routines or drivers to user-space processes, and it has low overhead. The most widely supported kernel mode uses a system timer; however, this mode is unable to measure kernel functions where interrupts are disabled. Newer CPU models support a hardware performance counter mode which uses hardware logic to record events without any active code needed. In Linux 2.2/2.4 only 32-bit x86 and IA64 are supported; in Linux 2.6 there is wider support: x86 (32 and 64 bit), DEC Alpha, MIPS, ARM, sparc64, ppc64, AVR32. Call graphs are supported only on x86 and ARM. In 2012 two IBM engineers recognized OProfile as one of the two most commonly used performance counter monitor profiling tools on Linux, alongside the perf tool. In 2021, OProfile was set to be removed from version 5.12 of the Linux kernel, with the user-space tools continuing to work by using the kernel's perf system.
User-space tools
The opcontrol tool is used to start and stop the daemon, which collects profiling data; this data is periodically saved to the /var/lib/oprofile/samples directory. opreport shows basic profiling data, opannotate can produce annotated sources or assembly, and opgprof converts OProfile data into a gprof-compatible format. An example session is sketched after the external links below.
See also
List of performance analysis tools
References
External links
W. Cohen, Tuning programs with OProfile // Wide Open Magazine, 2004, pages 53–62
Prasanna Panchamukhi, Smashing performance with OProfile. Identifying performance bottlenecks in real-world systems // IBM DeveloperWorks, Technical Library, 16 Oct 2003
Justin Thiel, An Overview of Software Performance Analysis Tools and Techniques: From GProf to DTrace, (2006) "2.2.2 Overview of Oprofile"
Linux kernel features Profilers
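As referenced above, a typical session with the legacy opcontrol-based workflow can be sketched as follows. The opcontrol, opreport and opannotate commands are standard OProfile tools (run as root); ./myprog is a hypothetical workload binary, and the --vmlinux path varies by system.

  opcontrol --init                    # load the oprofile kernel module
  opcontrol --vmlinux=/boot/vmlinux   # optional: enable kernel-image profiling
  opcontrol --start                   # start the daemon and begin sampling
  ./myprog                            # run the workload to be profiled
  opcontrol --stop                    # stop sampling
  opreport                            # summarise collected samples per binary
  opannotate --source ./myprog        # annotate the program's source with sample counts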
12931832
https://en.wikipedia.org/wiki/Asunder%20%28software%29
Asunder (software)
Asunder is a free and open-source graphical (GTK 2) audio CD ripper program for Unix-like systems. It has no dependencies on the GNOME libraries (GStreamer and dconf) or on libraries of other desktop environments, and functions as a front-end for cdparanoia (a rough sketch of the underlying rip-and-encode steps follows the external links below). Its first version was released in January 2005. Asunder is free software released under the GNU General Public License version 2.
Features
Saves audio tracks as WAV, MP3, Vorbis, FLAC, Opus, WavPack, Musepack, AAC, or Monkey's Audio files
Uses the CDDB protocol to name and tag each track (freedb.freac.org is the changeable default source)
Creates M3U playlists
Encodes to multiple formats in one session
Simultaneously rips and encodes
Allows each track to be tagged with a different artist
Does not require a specific desktop environment (just GTK)
See also
Sound Juicer – the official CD ripper of GNOME
Other CD rippers for Linux
References
External links
Audio software that uses GTK Free audio software Free software programmed in C Linux CD ripping software Optical disc-related software that uses GTK
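As referenced above, Asunder drives cdparanoia and external encoders rather than implementing ripping itself. A rough command-line sketch of the equivalent manual steps (the exact invocations Asunder uses internally are an assumption; track and file names are placeholders):

  cdparanoia 1 track01.wav        # rip track 1 of the CD to a WAV file
  lame track01.wav track01.mp3    # encode the rip to MP3
  flac track01.wav                # encode to FLAC (writes track01.flac)
  oggenc track01.wav              # encode to Ogg Vorbis (writes track01.ogg)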
2262699
https://en.wikipedia.org/wiki/Sun%20Fire
Sun Fire
Sun Fire is a series of server computers introduced in 2001 by Sun Microsystems (since 2010, part of Oracle Corporation). The Sun Fire branding coincided with the introduction of the UltraSPARC III processor, superseding the UltraSPARC II-based Sun Enterprise series. In 2003, Sun broadened the Sun Fire brand, introducing Sun Fire servers using the Intel Xeon processor. In 2004, these early Intel Xeon models were superseded by models powered by AMD Opteron processors. Also in 2004, Sun introduced Sun Fire servers powered by the UltraSPARC IV dual-core processor. In 2007, Sun again introduced Intel Xeon Sun Fire servers, while continuing to offer the AMD Opteron versions as well. SPARC-based Sun Fire systems were produced until 2010, while x86-64 based machines were marketed until mid-2012. In mid-2012, Oracle Corporation ceased to use the Sun Fire brand for new server models.
Operating systems
UltraSPARC-based Sun Fire models are licensed to run the Solaris operating system versions 8, 9, and 10. Although not officially supported, some Linux versions are also available from third parties, as well as OpenBSD and NetBSD. Intel Xeon and AMD Opteron based Sun Fire servers support Solaris 9 and 10, OpenBSD, Red Hat Enterprise Linux versions 3–6, SUSE Linux Enterprise Server 10 and 11, Windows 2000, and Windows Server 2003, 2008, and 2008 R2.
Model nomenclature
Later Sun Fire model numbers have prefixes indicating the type of system, thus:
V: entry-level and mid-range rackmount and cabinet servers (UltraSPARC, IA-32 or AMD64)
E: high-end enterprise-class cabinet servers with high-availability features (UltraSPARC)
B: blade servers (UltraSPARC or IA-32)
X: rackmount x86-64 based servers
T: entry-level and mid-range rackmount servers based on UltraSPARC T-series CoolThreads processors
When Sun offered Intel Xeon and AMD Opteron Sun Fire servers under the V-Series sub-brand, Sun used an x suffix to denote Intel Xeon processor based systems and a z suffix for AMD Opteron processor based systems, but this convention was later dropped. The z suffix was also used previously to differentiate the V880z Visualization Server variant of the V880 server. Sun's first-generation blade server platform, the Sun Fire B1600 chassis and associated blade servers, was branded under the Sun Fire server brand; later Sun blade systems were sold under the Sun Blade brand.
In 2007, Sun, Fujitsu and Fujitsu Siemens introduced the common SPARC Enterprise brand for server products. The first SPARC Enterprise models were the Fujitsu-developed successors to the midrange and high-end Sun Fire E-series. In addition, the Sun Fire T1000 and T2000 servers were rebranded as the SPARC Enterprise T1000 and T2000 and sold under the Fujitsu brands, although Sun continued to offer these with their original names. Later T-series servers have also been badged SPARC Enterprise rather than Sun Fire. Since late 2010, Oracle Corporation no longer uses the Sun Fire brand for its current T series SPARC servers, and since mid-2012 for new X series x86-64 machines based on Intel Xeon CPUs. x86-64 server models which had been developed by Sun Microsystems before its acquisition, and were still in production, have all been rebranded as Sun Server X-series.
Sun Fire model range
Some servers were produced in two versions, the original version and a later RoHS version.
Because RoHS components and spares may, as a general maintenance and upgrade guideline, be installed into the original non-RoHS version of a server, this listing treats a server's end-of-life (EOL) date as the EOL date of the RoHS version of that server. UltraSPARC architecture x86/x64 architecture Sun Server / Oracle Server As of 2012, the x86 server range continued under the "Sun Server" or "Oracle Server" names. See also Fireplane References Sun System Handbook, Version 2.1.8, April 2005 External links Sun Servers, Sun Microsystems OpenBSD's Sunfire support Oracle - Entry-Level Servers - Legacy Product Documentation Oracle - Midrange Servers - Legacy Product Documentation Oracle - x86 Servers - Legacy Product Documentation Oracle - High-End Servers - Legacy Product Documentation Fire Oracle hardware Computer-related introductions in 2001 SPARC microprocessor products
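The prefix and suffix conventions described under Model nomenclature can be summarized in a small decoder. A sketch under those rules (the model strings are examples; exceptions such as the V880z Visualization Server are deliberately not handled):

    # Sun Fire model-number prefixes, per the nomenclature above
    PREFIXES = {
        "V": "entry-level/mid-range rackmount or cabinet server",
        "E": "high-end enterprise-class cabinet server",
        "B": "blade server",
        "X": "x86-64 rackmount server",
        "T": "UltraSPARC T-series (CoolThreads) server",
    }

    def describe(model):
        """Decode a Sun Fire model number such as 'V40z' or 'T2000'."""
        kind = PREFIXES.get(model[0], "unknown class")
        # V-series era only: x = Intel Xeon, z = AMD Opteron
        if model.endswith("x"):
            kind += ", Intel Xeon"
        elif model.endswith("z"):
            kind += ", AMD Opteron"
        return kind

    print(describe("V40z"))  # entry-level/mid-range rackmount or cabinet server, AMD Opteron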
3112330
https://en.wikipedia.org/wiki/WVTC
WVTC
WVTC is the radio station of Vermont Technical College, operating on a 90.7 MHz FM carrier with an effective power of 300 watts. The station is located in Morey Hall on the Randolph Center campus. WVTC is operated and maintained by the students of VTC through the Radio Club, and is financially supported by the VTC Student Council. History WVTC began in 1963 as a small AM station on 640 kHz with a 2-watt transmitter in the Old Dorm building. It is unclear how the antenna system worked; there is speculation that it may have been a carrier current system. The 1965 VTC yearbook distinguished between the Radio Club and the Radio Station WVTC-AM 640. In 1966 a new dorm building, Morey Hall, was built on the Randolph Center campus, including provisions for a campus radio station, and the radio station moved into its new home at Morey Hall the following year. In 1968 the radio station received a license from the FCC to broadcast as WVTC-FM at 90.7 MHz and 10 watts. The station prospered during the 1970s, with the involvement of Howard Ginsberg, who later founded WXXX, a successful commercial Top 40 station in Burlington, Vermont. In the 1980s, the station was granted a 300-watt license. In the 1990s, WVTC migrated to CD technology. From April to November 1995 the station was off the air due to a failed EBS receiver and other potential violations. Many upgrades were performed the following year, including new studio and broadcasting equipment. The station experimented with station automation, using multi-CD changers controlled via serial connection. A new transmitter was installed and the tower erected on December 26, 1996. The station began internet webcasting in February 1997, streaming both music and webcam images, making it the first online streaming radio station in Northern New England; a copy of the on-air audio was sent in MP3 format at 16 kbit/s using custom software on a Pentium 60. MP3s were introduced as an on-air medium the same year. The first MP3 played on the station was "Stupid Girl" by Garbage. In 1998 the WebDJ automation system went live and SCA technology was explored, with data about weather and news events being sent. The same year WVTC broadcast 24/7 for the first time, and ranked in the broadcast ratings for Central Vermont. Technological innovations of 1998 included an upgrade to a 32 kbit/s stream using a dual-processor Linux server called "Halftime" purchased by the computer club, as well as the creation of a remote control interface for the Winamp application. Station staff wrote an improved FM radio card driver for Linux, which was added to the Linux kernel. In March 2000, DJ "Disco" Vince Giffin set a world record for the longest time for a single DJ on the air, at 73 hours. In 2001 MP3 music replaced CDs as the station's primary audio storage format. In 2006 the station's survival was jeopardised by low membership; it went off the air during the summer due to transmitter problems, returning after repairs in the fall. The following Spring a small group of students banded together to prevent the school administration from shutting down WVTC. Some hardware and software upgrades were performed in the spring and a couple of regular shows were broadcast. In Fall 2007 the station was forced to shut down when the college failed to renew its FCC license. Shortly afterwards, the college was approached by Vermont Public Radio with a proposal to lease the station, which was not accepted.
The station resumed transmission in Fall 2008, but went offline again before the end of the year after repeated power failures damaged its equipment. The equipment was eventually replaced, and transmission resumed in Fall 2009. In Spring 2011 the station entered into a Consent Decree with the FCC, and returned to being a fully licensed station after a Final Consent Decree Inspection in Fall 2013. In Fall 2015 the internet stream was upgraded to 320 kbit/s. Gallery External links WVTC website VTC Radio stations established in 1961 1961 establishments in Vermont VTC
2095370
https://en.wikipedia.org/wiki/Enemy%20Territory%3A%20Quake%20Wars
Enemy Territory: Quake Wars
Enemy Territory: Quake Wars is a first-person shooter video game developed by Splash Damage and published by Activision for Microsoft Windows, Linux, Mac OS X, PlayStation 3 and Xbox 360. The game was first released in the PAL region on September 28, 2007, and later in North America on October 2. It is a spinoff of the Quake series and the successor to 2003's Wolfenstein: Enemy Territory. Quake Wars is a prequel set in the same universe as Quake II and Quake 4. New features include controllable vehicles and aircraft, multiple AI deployables, asymmetric teams, much larger maps and the option of computer-controlled bot opponents. Unlike the previous Enemy Territory games, Quake Wars is a commercial release rather than a free download. Enemy Territory: Quake Wars received mostly positive reviews upon release, with a more mixed reception for the console versions. In 2011, the rights to the game reverted to ZeniMax Media. Gameplay Quake Wars is a class-based, objective-focused, team-oriented game. Teams are based on human (GDF) and alien (Strogg) technology. While the teams are asymmetrical, both sides have the same basic weapons and tools to complete objectives. Unlike other team-based online games (such as the Battlefield series), the gameplay is much more focused on one or two main objectives at a time, rather than spread all over the combat area. This allows for more focused and intense combat situations, similar to the original Unreal Tournament assault mode. New objectives normally appear for each player class during gameplay, often based on the specific capabilities of that class. The game can also group players into fireteams for greater coordinated strategy. These fireteams can be user-created or game-generated, depending on the mission selected by the player. The game has an experience points (XP) reward system, which awards each player points for completed missions. Accumulated XP leads to unlocks, which range from the availability of new equipment and weapons to abilities like faster movement or more accurate weapons. These rewards are reset to zero after the completion of every campaign, which consists of three unique maps sharing a common locale or region. Development Enemy Territory: Quake Wars was first announced through a press release on May 16, 2005. The public beta opened to FilePlanet paid subscribers on June 20, 2007, and to nonpaying members three days later. There were also beta keys given out for a limited time exclusively at QuakeCon 2007. A second build of the beta was released on August 3. It featured a new map entitled Valley, replacing Sewer, and several changes to the game code to improve performance and implement new features. This map had been featured in tutorial videos released prior to the beta, and was the map made available to play at QuakeCon 2006. The Valley map is based on a real Earth location: Yosemite Valley. The public beta ended on September 25. A PC demo for Windows was released on September 10, 2007, and for Linux on October 16, also featuring the map Valley. A Mac OS X client has also been released. The final retail version was first released on September 28, 2007, for Windows. The initial Linux release, created by id Software employee Timothee Besset, was made available three weeks later on October 19. As of 2019, Quake Wars is the most recent id Software game to have received a Linux release.
MegaTexture Quake Wars utilized a modified version of the id Tech 4 engine with the addition of a technology called MegaTexture, a new texture-mapping technique developed by John D. Carmack of id Software. The technology allows maps to be totally unique, without any repeated terrain tiles. Battlefields can be rendered to the horizon without any fogging, with over a square mile of terrain at inch-level detail, while also providing terrain-type detail that defines such factors as bullet hit effects, vehicle traction, sound effects, and so on. Each megatexture is derived from a 32768×32768-pixel image, which takes up around 3 gigabytes in its raw form (3 bytes per pixel, one byte for each color channel). Marketing A collector's edition of the game was released exclusively for Microsoft Windows on October 2, 2007, in North America and on September 9 in Australia and Europe (where it was released as the Premium Edition). The collector's edition features the game itself, 10 collectible cards (there are 12 cards, but the first two were only available via preorder) and a bonus disc containing concept art, HD videos, interviews, artwork, downloadable icons, ringtones and music tracks. Reception On the review aggregator GameRankings, the PC version of the game had an average score of 84% based on 61 reviews. On Metacritic, the game had a score of 84 out of 100 based on 55 reviews. Kevin VanOrd of GameSpot gave the game a rating of 8.5/10. Other reviews were generally very positive, scoring Quake Wars in the 8–9 (out of 10) range. For the week ending September 29, 2007, Quake Wars was the best-selling PC title in the United Kingdom according to the Entertainment and Leisure Software Publishers Association. On October 17, 2007, following its chart-topping sales in the United Kingdom, Quake Wars also debuted at #1 in the United States, taking the top spot on the NPD Group's list of the ten best-selling PC games. Xbox 360 and PlayStation 3 reviews for the game were generally much less positive, with IGN giving the 360 version 6.1 and the PlayStation 3 version 5.3, citing gameplay issues and graphics inferior to the PC version as causes for the lower scores. References Notes External links 2007 video games Activision games First-person shooters First-person strategy video games Linux games MacOS games PlayStation 3 games Quake (series) Video games developed in the United Kingdom Video games developed in the United States Video games scored by Bill Brown Windows games Xbox 360 games Video game prequels Video game spin-offs Id Software games Multiplayer and single-player video games Splash Damage games Aspyr games Id Tech games Alien invasions in video games Science fiction video games
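The raw-size figure quoted in the MegaTexture section follows directly from the stated dimensions; a quick check in Python:

    side = 32768              # a megatexture is derived from a 32768 x 32768 pixel image
    bytes_per_pixel = 3       # one byte per color channel (R, G, B)
    raw = side * side * bytes_per_pixel
    print(raw)                # 3221225472 bytes
    print(raw / 2**30)        # 3.0, i.e. exactly 3 GiB -- the "around 3 gigabytes" cited above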
286578
https://en.wikipedia.org/wiki/Heinz%20College
Heinz College
The Heinz College of Information Systems and Public Policy (Heinz College) at Carnegie Mellon University in Pittsburgh, Pennsylvania, United States is a private graduate college consisting of two schools: the School of Public Policy & Management, one of the nation's top-ranked public policy schools, accredited by the Network of Schools of Public Policy, Affairs, and Administration, and the School of Information Systems & Management, one of the nation's top-ranked information schools. It is named for the late United States Senator H. John Heinz III (1938–1991) of Pennsylvania. The Heinz College is also a member of the Institute for Information Infrastructure Protection, one of 24 members of the iCaucus leadership of iSchools, and a founding member of the MetroLab Network, a national smart city initiative, and of New America's Public Interest Technology University Network. The Heinz College educational process integrates policy analysis, management, and information technology. Coursework emphasizes the applied and interdisciplinary fields of empirical methods and statistics, economics, information systems and technology, operations research, and organizational behavior. In addition to full-time, on-campus programs in Pittsburgh, Washington, DC, Los Angeles, and Adelaide, the Heinz College offers graduate-level programs to non-traditional students through part-time on-campus and distance programs, customized programs, and executive education programs for senior managers. History Richard King Mellon and his wife Constance had long been interested in urban and social issues. In 1965, they sponsored a conference on urban problems, at which they began discussions with the University of Pittsburgh and Carnegie Mellon University to create a school focused on public affairs. In 1967, Carnegie Mellon President H. Guyford Stever, Richard M. Cyert, Dean of the Tepper School of Business, and Professors William W. Cooper and Otto Davis met and formed a university-wide committee to discuss creating a school that would train leaders to address complex problems in American urban communities. Davis was asked to draft a proposal to create such a school and focused on applying the Tepper School of Business' pioneering quantitative and skill-based approach to management education, as well as technology, to public sector problems. In 1968, William Cooper and Otto Davis presented the final proposal for the School of Urban and Public Affairs (SUPA) to the Richard King Mellon Foundation. The proposal found favor with R. K. Mellon, and he became strongly committed to creating such a school. The R. K. Mellon Foundation sent a proposal to President Stever to finance it with an initial grant of $10 million, and on 1 November 1968, President Stever created the School of Urban and Public Affairs with William Cooper as the first dean. The school initially drew much of its faculty from the Tepper School of Business and was based in Margaret Morrison Carnegie Hall. Eventually, the school became independent of other colleges within the university and moved to its current location in historic Hamburg Hall when the facility was acquired by the university from the U.S. Bureau of Mines. Subsequent deans include Otto Davis, Brian Berry, Joel A. Tarr, Alfred Blumstein, former Carnegie Mellon Provost Mark Kamlet, Linda C. Babcock, Jeffrey Hunker, Mark Wessel, and current dean Ramayya Krishnan. In 1992, Teresa Heinz donated a large sum of money to the school, which was then renamed the H. John Heinz III School of Public Policy and Management in honor of Mrs.
Heinz's late husband, Senator H. John Heinz III. Senator Heinz, heir to the H. J. Heinz Company fortune, had been killed when his small private plane crashed one year before. In 2007, the Heinz School received a grant from the Heinz Foundations that transformed the school into a college and formalized the School of Information Systems & Management alongside the School of Public Policy & Management under the college's administration. The official launch of the H. John Heinz III College of Information Systems and Public Policy was held on October 24, 2008, during Carnegie Mellon's Homecoming weekend and was led by Dean Krishnan, Teresa Heinz, and former United States Secretary of the Treasury Paul O'Neill. Facilities Heinz College is headquartered in Hamburg Hall, a building listed on the National Register of Historic Places and designed by noted Beaux-Arts architect Henry Hornbostel. Hamburg Hall is named for Lester A. Hamburg, an industrialist and philanthropist active in the Pittsburgh Jewish community. The Heinz College also has a branch campus in Adelaide, South Australia, which offers master's degrees in Public Policy and Management and in Information Technology. Heinz College also maintains a North Hollywood center in Los Angeles as part of the jointly administered master's degree program in Entertainment Industry Management, and a center in Washington, DC, on Capitol Hill for students in the Public Policy and Management master's program. Carnegie Mellon is renovating and expanding the Heinz College's Pittsburgh facilities in four phases; the facilities sit across Forbes Avenue from the Tepper Quadrangle announced in 2013. The ultimate plan for Hamburg Hall is to capture new space – approximately 20,000 square feet – by enclosing the courtyard between the rotunda, the East and West Wings, and the adjacent Elliott Dunlap Smith Hall with a soaring glass roof structure. This new space will include a large, multi-purpose Classroom of the Future, lounges, meeting/study space, and a café. Phase I of the renovation and expansion of Hamburg Hall was entirely financed by Heinz College and was completed in September 2013. Heinz College students immediately benefited from convenient access to the new student services and computing services suites. The construction of new career services interview rooms provides up-to-date facilities for on-campus recruiters. A December 2013 gift from The Heinz Endowments, combined with gift commitments from other donors, enabled the Heinz College to expedite the final architectural design of Phase II elements, finalize necessary construction planning, commence renovations and expansion, and complete a structure that will add additional value to the college. A new 150-seat auditorium in the courtyard between Hamburg Hall and Smith Hall was constructed, and both levels of the rotunda were transformed into student study and lounge spaces as well as a grand entrance and lobby area, renamed the Teresa Heinz Rotunda. The new auditorium allows the college to host high-profile speakers. Further, the west wing of Hamburg Hall now consists of forward-looking classrooms in the space vacated by the Engineering Research Accelerator when it moved to the newly constructed Scott Hall. An additional entrance from Forbes Avenue was also constructed. During Phase III the addition to Hamburg Hall, including a glass roof, end walls, café, and study space, will be constructed.
Fire protection and elevator improvements will also be addressed, as will the addition of new classrooms (including designated executive education rooms). The addition of 20,000 square feet to Hamburg Hall will allow the Heinz College to continue to grow student enrollment. This phase is planned for completion by 2017. The final phase, Phase IV, will renovate third-floor faculty and PhD offices and meeting spaces. The new additions and renovations will be designed to achieve LEED Silver certification. Rankings In the 2019 U.S. News & World Report Graduate School rankings, the Heinz College was ranked 14th among schools of public affairs. Of the 285 schools of public affairs across the nation that were surveyed for 2019, Heinz College ranked: 1st in Information and Technology Management; 5th in Environmental Policy and Management; 6th in Public Policy Analysis; 14th in Urban Policy; 16th in Health Policy and Management; 19th in Social Policy; 27th in Public Finance and Budgeting; and 32nd in Public Management and Leadership. Heinz College also ranked 2nd in the Faculty Scholarly Productivity Index listing for the top-performing programs in public administration and 9th in the listing for the top-performing programs in public policy. The PhD program in Public Policy and Management at the Heinz College was ranked in the top 5 overall and in the top 3 in faculty research activity by the National Research Council in 2010. The Medical Management program was ranked 4th by Modern Healthcare Magazine in the 2009 rankings of the top management graduate schools for physician executives. InformationWeek named the Heinz College's Master in Information Systems Management with Business Intelligence & Data Analytics concentration as one of the top 20 in big data analytics. The Heinz College was awarded the 2016 UPS George D. Smith Prize by the Institute for Operations Research and the Management Sciences (INFORMS). The Smith Prize recognizes the best academic departments and schools in analytics, management science, and operations research. Education The Heinz College has the following schools: School of Public Policy & Management Master of Science in Public Policy & Management (MSPPM; full-time). Tracks include: Accelerated 3-Semester Track (full-time); geared toward incoming students with 3 or more years of relevant experience Data Analytics (MSPPM-DA; full-time); focuses on quantitative data analytics Global (full-time); first year in Adelaide, Australia, second year in Pittsburgh, PA. (See Heinz College Australia) Washington, D.C. (MSPPM-DC; full-time); first year in Pittsburgh, PA, second year in Washington, D.C.
completing classes and an apprenticeship Master of Science in Health Care Policy & Management (MSHCPM; full-time) Master of Science in Health Care Analytics & Information Technology (full-time) Master of Public Management (part-time) Master of Medical Management (part-time) School of Information Systems & Management Bachelor of Science in Information Systems (jointly with the Dietrich College of Humanities and Social Sciences) Master of Information System Management (MISM; full-time) Master of Science in Information Security Policy & Management (full-time) Master of Science in Information Technology (part-time) Joint degree programs with the Carnegie Mellon College of Fine Arts Master of Arts Management (MAM; full-time) Master of Entertainment Industry Management (MEIM; full-time) PhD programs: Public Policy & Management Information Systems & Management Economics & Public Policy (jointly with the Tepper School of Business) Statistics and Public Policy (jointly with the Department of Statistics and Data Science) Strategy, Entrepreneurship, & Technological Change (jointly with three other departments) Technological Change & Entrepreneurship (Carnegie Mellon Portugal program) Machine Learning & Public Policy (jointly with the Machine Learning Department) Notable associated people Nilofar Bakhtiar - Pakistani Senator and former Federal Minister for Tourism Linda C. Babcock - former Dean, behavioral economist, and expert on the gender pay gap Allen Biehler - former Secretary of the Pennsylvania Department of Transportation Keith Block - Co-CEO of Salesforce.com Alfred Blumstein - one of the world's top criminologists and operations researchers, winner of the 2007 Stockholm Prize in Criminology, member of the National Academy of Engineering, INFORMS Fellow and past president, director of the National Consortium on Violence Research Nik Bonaddio - founder of numberFire Kathleen Carley - computational sociologist and expert in dynamic network analysis Jonathan Caulkins - operations researcher and drug policy expert, INFORMS fellow, founder of the Pittsburgh branch of the RAND Corporation Jack Chow - public health expert, first Assistant Director-General of the World Health Organization on HIV/AIDS, Tuberculosis, and Malaria, Special Representative of the U.S. Secretary of State on Global HIV/AIDS and Deputy Assistant Secretary of State for Health and Science William W. Cooper - founding Dean of Heinz College and pioneer in management science and accounting, INFORMS Fellow and past president, John von Neumann Theory Prize winner, and member of the Accounting Hall of Fame John Patrick Crecine - former President of the Georgia Institute of Technology, former Dean of the Gerald R. Ford School of Public Policy, former Dean of the Dietrich College of Humanities and Social Sciences Carmen Yulín Cruz - current mayor of San Juan, Puerto Rico David Dausey - public health expert and consultant for the RAND Corporation Jon Delano - Money & Politics editor at KDKA-TV David Farber - co-creator of ARPANET and former Chief Technologist for the Federal Communications Commission (FCC) Stephen Fienberg - renowned statistician and member of the National Academy of Sciences Richard Florida - social economist, urban scientist, and creator of the Creative class concept Anthony Foxx - former United States Secretary of Transportation Rayid Ghani - Chief Scientist of the Obama for America campaign John Graham - Dean of the Indiana University School of Public and Environmental Affairs, former Dean of the Frederick S.
Pardee RAND Graduate School, and former Administrator of the Office of Information and Regulatory Affairs Jendayi Frazer - US Assistant Secretary of State for African Affairs in the George W. Bush administration Melvin J. Hinich - expert in signal processing and statistics Jeffrey Hunker - expert in information security policy, advisor in the United States Department of Commerce, founding director of the Critical Infrastructure Assurance Office, Senior Director for Critical Infrastructure on the National Security Council Farnam Jahanian - President and former Provost of Carnegie Mellon University and former Director of the National Science Foundation Directorate for Computer and Information Science and Engineering Sydney Kamlager - District Director for California State Senator Holly Mitchell and Trustee-Elect for the Los Angeles Community College District David Krackhardt - expert in organizational behavior and social network analysis Ramayya Krishnan - Dean and expert in management science and information technology, strategy, and policy, INFORMS Fellow and President-elect Yeh Kuang-shih - Minister of Transportation and Communication of Taiwan Susie Lee - United States Representative for Nevada's 3rd District Charles F. Manski - economist and econometrician in the realm of rational choice theory, an innovator known for his work on partial identification Dan J. Martin - Dean of the Carnegie Mellon College of Fine Arts J. Kevin McMahon - President and CEO of the Pittsburgh Cultural Trust David H. McCormick - former Under Secretary for International Affairs within the US Department of the Treasury and President of Bridgewater Associates Sarah E. Mendelson - former United States Ambassador to the United Nations Economic and Social Council Daniel S. Nagin - criminologist, winner of the 2014 Stockholm Prize in Criminology, and fellow of the American Academy of Political and Social Science Jairam Ramesh - elected member of the Indian Parliament and the Cabinet Minister for Rural Development Mark Roosevelt - President of Antioch College, Democratic candidate for Governor of Massachusetts, superintendent of the Pittsburgh Public Schools, and member of the Roosevelt family Denise M. Rousseau - expert in organizational behavior and the psychological contract Joe Sestak - United States Congressman from Pennsylvania from 2007 to 2011, former United States Navy Vice Admiral Peter M. Shane - Professor of Law and Public Policy specializing in administrative law and e-democracy, former Dean of the University of Pittsburgh School of Law Kiron Skinner - United States Department of State Director of Policy Planning, expert and author in international relations, Cold War policy, and fellow at the Hoover Institution Luke Skurman - founder of Niche Michael D. Smith - economist in information technology and pioneer in The Long Tail phenomenon Robert P.
Strauss - economist and expert in public finance and tax policy Subra Suresh - former president of Carnegie Mellon University and former Director of the National Science Foundation John Tarnoff - studio executive, film and interactive producer, technology entrepreneur, and former Head of Show Development at DreamWorks Animation Irene Tinagli - member of the European Parliament and former member of the Italian Parliament Paula Wagner - film executive and talent agent, former CEO at United Artists and Cruise/Wagner Productions Robert Wilburn - former president of the Carnegie Museums of Pittsburgh, Indiana University of Pennsylvania, and director of Heinz College in Washington, DC See also Heinz College Australia, Heinz College's branch campus in Adelaide, South Australia References External links Schools and departments of Carnegie Mellon Educational institutions established in 1968 Public administration schools in the United States Public policy schools Information schools 1968 establishments in Pennsylvania
51898759
https://en.wikipedia.org/wiki/ABC%20Chinese%E2%80%93English%20Dictionary
ABC Chinese–English Dictionary
The ABC Chinese–English Dictionary or ABC Dictionary (1996), compiled under the chief editorship of John DeFrancis, is the first Chinese dictionary to collate entries in single-sort alphabetical order of pinyin romanization, and a landmark in the history of Chinese lexicography. It was also the first publication in the University of Hawai'i Press's "ABC" (Alphabetically Based Computerized) series of Chinese dictionaries. The press republished the ABC Chinese–English Dictionary in a pocket edition (1999) and a desktop reference edition (2000), as well as the expanded ABC Chinese–English Comprehensive Dictionary (2003) and the dual ABC Chinese–English/English-Chinese Dictionary (2010). Furthermore, the ABC Dictionary databases have been developed into computer applications such as Wenlin Software for learning Chinese (1997). History John DeFrancis (1911–2009) was an influential American sinologist, author of Chinese language textbooks, lexicographer of Chinese dictionaries, and Professor Emeritus of Chinese Studies at the University of Hawaii at Mānoa. After he retired from teaching in 1976, DeFrancis was a prolific author of influential works such as The Chinese Language: Fact and Fantasy (1984) and Visible Speech: The Diverse Oneness of Writing Systems (1989). Victor H. Mair, a sinologist and professor of Chinese at the University of Pennsylvania, first proposed the idea of a computerized pinyin Chinese–English dictionary in his 1986 lexicographical review article. He defined "alphabetically arranged dictionary" to mean a dictionary in which all words (cí 詞) are "interfiled strictly according to pronunciation. This may be referred to as a "single sort/tier/layer alphabetical" order or series." He emphatically did not mean a usual Chinese dictionary collated according to the initial single graphs (zì 字) that are only the beginning syllables of whole words. "With the latter type of arrangement, more than one sort is required to locate a given term. The head character must first be found and then a separate sort is required for the next character, and so on." Mair's article had two purposes: to call the attention of his colleagues to the critical need for an alphabetically arranged Chinese dictionary and to enlist their help in making it a reality, and to suggest that all new sinological reference tools should at least include alphabetically ordered indices. "Someone who already knows the pronunciation of a given expression but not its meaning should not be cruelly burdened by having to fuss with radicals, corners, strokes, and what not. Let him go directly to the object of his search instead of having to make endless, insufferable detours in an impenetrable forest of graphs." In DeFrancis' Acknowledgements, he says "This dictionary owes its genesis to the initiative of Victor H. Mair", who, after unsuccessful attempts to obtain financial support for the compilation of an alphabetically based Chinese–English dictionary, in 1990 organized an international group of scholars who volunteered to contribute towards compiling it. However, "agonizingly slow progress" made it apparent that a fulltime editor was necessary, and in May 1992 John DeFrancis offered to undertake the project centered at the University of Hawai'i. Along with Prof.
DeFrancis overseeing the general planning and supervision of the project as well as its detailed operations, a volunteer team of some 50 contributors – including academics, Chinese language teachers, students, lexicographers, and computer consultants – were involved in the myriad tasks of processing dictionary entries, such as defining, inputting, checking, and proofreading. The University of Hawai'i Press published the ABC Chinese–English Dictionary in September 1996. UHP republished the original paperback ABC Chinese–English Dictionary, which had a total of 916 pages and was 23 cm high, as the ABC Chinese–English Dictionary: Pocket Edition (1999, 16 cm) and the hardback ABC Chinese–English Dictionary: Desk Reference Edition (2000, 23 cm). In Shanghai, DeFrancis' dictionary was published under the title Han-Ying Cidian: ABC Chinese–English Dictionary (Hanyu Dacidian Chubanshe, 1997). For reasons of political correctness, the Shanghai edition amended the entry for Lin Biao. It altered the original American edition's "veteran Communist military leader and Mao Zedong's designated successor until mysterious death" to "veteran Communist military leader; ringleader of counterrevolutionary group (during the Cultural Revolution)". Victor H. Mair became general editor of the ABC Chinese Dictionary Series in 1996, and the University of Hawai'i Press has issued ten publications (as of October 2016), including two developments from the ABC Chinese–English Dictionary (1996) with its 71,486 head entries. John DeFrancis and others edited the hardback ABC Chinese–English Comprehensive Dictionary (2003, 1464 pp., 25 cm), which contains over 196,501 head entries, making it the most comprehensive one-volume dictionary of Chinese. DeFrancis (posthumously) and Zhang Yanyin, professor of Applied Linguistics and Educational Linguistics at the University of Canberra, edited the bidirectional paperback ABC Chinese–English/English-Chinese Dictionary (2010, 1240 pp., 19 cm). It contains 67,633 entries: 29,670 in the English-Chinese section and 37,963 in the Chinese–English section, which is an abridgment of the ABC Chinese–English Comprehensive Dictionary and includes improvements such as more usage example sentences. Computers (namely, the C in ABC Dictionary) were purposefully involved in almost every stage of dictionary compilation and publication in order to facilitate further advances in electronic lexicography and software development. In 1997, the Wenlin Institute published Wenlin Software for Learning Chinese with about 14,000 head entries (version 1.0) and entered into a licensing agreement with the University of Hawaii to utilize the ABC Dictionary database in Wenlin software. The first-edition ABC Chinese–English Dictionary (1996) was incorporated into Wenlin 2.0 with over 74,000 entries (1998); the second ABC Chinese–English Comprehensive Dictionary (2003) went into Wenlin 3.0 with over 196,000 entries (2002); and the third-edition ABC English-Chinese/Chinese–English Dictionary (2010) was incorporated into Wenlin 4.0 (2011), which includes 300,000 Chinese–English entries, 73,000 Chinese character entries, and 62,000 English-Chinese entries. Prior to the alphabetically arranged ABC Chinese–English Dictionary, virtually every Chinese dictionary was based upon character head entries, arranged either by character shape or pronunciation, that subsume words and phrases written with that head character as the first syllable.
While pronunciation determines the placement of words within the unconventional ABC Dictionary, Chinese characters still determine the position of words within a standard dictionary. Comparing a Chinese character-based dictionary with the pinyin-based ABC Dictionary illustrates the difference. The Chinese–English Dictionary locates the head character entry lín 林 "① forest; woods; grove ② circles …" as one of 14 characters pronounced lín, and alphabetically lists 17 words with lín as the first syllable, for instance, línchǎnpǐn 林产品 "forest products", línhǎi 林海 "immense forest", and línyè 林業 "forestry". The ABC Dictionary includes lín 林 "① forest; woods; grove ② forestry…" as one of 6 characters pronounced lín, followed by alphabetically listed lin-initial headwords from línbā 淋巴 "lymph" to línfēng 臨風 "facing against the wind", but then ling-initial words begin to appear with líng 〇 "zero", and only after another three pages will one find lìngzūn 令尊 "(courteous) your father" followed by línhǎi "immense forest". DeFrancis' ABC Chinese–English Dictionary is aptly described as having "defied the tyranny of Chinese characters". Content The ABC Dictionary includes 5,425 different Chinese characters and a total of 71,486 lexical entries. The dictionary's most notable feature is that it is entirely arranged by pinyin in the alphabetical order of complete compound words. For example, kuàngquán 矿泉 "mineral spring" immediately precedes kuángquǎnbìng 狂犬病 "rabies", which in turn immediately precedes kuàngquánshuǐ 矿泉水 "mineral water", even though the first and last words begin with the same character and the middle word with another. The present dictionary has several titles: ABC Dictionary (half title page) The ABC Chinese–English Dictionary: Alphabetically Based Computerized—with the last three words encircling Chinese calligraphic 电脑拼音编码 [diànnǎo pīnyīn biānmǎ "computer pinyin encoding"] (title page) ABC (Alphabetically Based Computerized) Chinese–English Dictionary (colophon) ABC Chinese–English Dictionary 漢英詞典‧按羅馬字母順序排列 [Hàn-Yīng cídiǎn ‧ àn luómǎ zìmǔ shùnxù páiliè "Chinese–English Dictionary: according to alphabetically sorted romanization"] (front cover). The ABC Chinese–English Dictionary comprises three main sections: an 18-page front matter, the 833-page body matter of alphabetically arranged entries, and a 64-page back matter with nine appendices. The front matter includes a Table of contents; Dedication to "China's Staunchest Advocates of Writing Reform"; Editor's Call to Action; Acknowledgments; Introduction with I. Distinctive Features of the Dictionary and II. Selection and Definition of Entries; and User's Guide with I. Arrangement of Entries, II. Orthography, III. Explanatory Notes and Examples, IV. Works Consulted, and V. Abbreviations. The dictionary proper gives alphabetically arranged lexical entries and English translation equivalents, from "a* 啊 [i.e., particle] used as phrase suffix ① (in enumeration) … ② (in direct address and exclamation) … ③ (indicating obviousness/impatience … ④ (for confirmation)" to "zúzūn 族尊 clan seniors". The ABC Dictionary has nine Appendices: I. Basic Rules for Hanyu Pinyin Orthography [promulgated by the State Language Commission in 1988]; II. Historical Chronology [from the Shang Dynasty c. 1700–1045 BC to the Republic of China "1912–1949" and People's Republic of China 1949–]; III. Analytic Summary of Transcription Systems [for Pinyin, Wade-Giles, Gwoyeu Romatzyh, Yale Romanization, and Zhuyin Fuhao]; IV. Wade-Giles/Pinyin Comparative Table; V.
PY/WG/GR/YR/ZF Comparative Table; VI. Radical Index of Traditional Characters, Notes on Kangxi Radicals, Kangxi Radical Chart, Kangxi Radical Index; VII. Stroke Order List of Recurrent Partials; VIII. Stroke Order Index of Characters with Obscure Radicals; IX. Radical Index of Simplified Characters, with Notes on Selected CASS Radicals [viz. the Chinese Academy of Social Sciences system of 189 radicals used in dictionaries like the Xinhua Zidian], High Frequency CASS Radicals, Simplified/Traditional Radical Conversion Table, CASS Radical Chart, CASS Radical Index [for users who want to look up a Chinese logograph's pronunciation, listing the 5,425 characters appearing in the dictionary]. DeFrancis' ABC Chinese–English Dictionary claims six lexicographical distinctions. 1. It offers the powerful advantage of arranging entries in single-sort alphabetical order as by far the simplest and fastest way to look up a term whose pronunciation is known. Alone among look-up systems, the ABC Dictionary enables users to find words seen only in transcription or heard but not seen in written form. And, since most dictionary consultation involves characters whose pronunciation is known (not just by native speakers, but also by learners beyond the very beginning level), the total saving in time is enormous. (Radical indexes of characters are provided for those cases where the pronunciation of a term is not known.) 2. It has been compiled with the aid of computers and lends itself to further development in electronic as well as printed form. 3. It makes use of the latest PRC lexicographical developments in respect to selection of terms and rules of orthography. 4. It utilizes frequency data from both the PRC and Taiwan to indicate the relative frequency of entries that are complete homographs (identical even as to tones) or partial homographs (identical except for tones) as an aid to student learning and computer inputting. 5. It presents a unique one-to-one correspondence between transcription and characters that permits calling up on computer the desired characters for any entry by simple uninterrupted typing of the corresponding transcription. 6. It introduces an innovative typographical format that enables its 71,486 entries (3,578 single-syllable and 68,908 multi-syllable entries) to be packed into about one-sixth less space than would be required by conventional dictionaries, while still providing greater legibility, in part thanks to larger characters. The result is a handy portable work that contains an unparalleled number of entries for its size. The main source for ABC Dictionary entries is the 1989 edition of Hanyu Pinyin Cihui 汉语拼音词汇 "Hanyu Pinyin Romanized Lexicon", a semi-official wordlist of 60,400 entries (without definitions) compiled by members of the China State Language Commission. Focusing upon the needs of Western students of Chinese, DeFrancis and the editors eliminated some terms and added others. Their dictionary includes many neologisms such as dàgēdà 大哥大 "cellular phone" or dǎo(r)yé 倒爷 "profiteer; speculator", as well as the modern Chinese practice of incorporating the Latin alphabet in coining Sino-alphabetic words like BP-jī BP机 "pager; beeper". In contrast to most Chinese–English dictionaries, DeFrancis' emphasizes multisyllabic cí "words" rather than monosyllabic zì "characters".
It only includes monosyllabic character entries that are likely to be encountered as free forms or unbound morphemes (according to the Xiandai Hanyu Pinlü Cidian 现代汉语频率词典 "Frequency Dictionary of Contemporary Chinese"). Chinese word frequency is an important aspect of the ABC Dictionary, and it lists homophones according to their decreasing occurrence. Frequency orders are based largely on Xiandai Hanyu Pinlü Cidian for monosyllabic entries and Zhongwen Shumianyu Pinlü Cidian 中文书面语频率词典 "Dictionary of the Chinese Written Language" for polysyllabic words. For entries with identical spelling, including tones, arrangement is by order of frequency, indicated by a superscript number before the transcription, a device adapted from Western lexicographic practice to distinguish homonyms. For example, "1dàomù 盜墓 rob graves" and "2dàomù 道木 railway sleeper [tie]". For entries that are homographic if tones are disregarded, the item of highest usage frequency is indicated by an asterisk following the transcription (see a* 啊 above), for instance, "lìguǐ 厉鬼 ferocious ghost" and "lìguì* 立柜 clothes-closet; wardrobe; hanging cupboard". While frequency information is useful for students learning vocabulary, the ABC Dictionary chiefly provides it in order to help determine the default items in computer usage. "Our unique combination of letters, tone marks, and raised numbers provides a simple and distinctive one-to-one correspondence between transcription and character(s) that is intended to facilitate computerized handling of the entries." The ABC Dictionary format for entries is: the pinyin spelling of the word in large boldface type the corresponding simplified Chinese characters; for single-character entries with a contrasting traditional Chinese character, the traditional form is given in square brackets upon the first appearance of each character/morpheme (e.g., "wà 袜[襪] socks; stockings; hose") parts of speech in boldface small caps (e.g., verb phrase, onomatopoeia), which is especially useful for Western students of Chinese (e.g., "huǎnghū 恍惚 ① absentminded ② dimly; faintly; seemingly") (optional) usage environments (e.g., TW Taiwan) or registers (vulg. vulgar) in angle brackets and italics, for instance, "húlǔ 胡虏 northern barbarians" translation equivalents in Roman type (e.g., "huàngdang 晃荡 rock; shake; sway"); semicolons separate slightly variant meanings of entries, and circled numbers distinguish more widely different meanings (as in huǎnghū above) (optional) example phrases and sentences in semi-bold italicized pinyin, but without characters, which users can find through alphabetic lookup, followed by English renderings in Roman type (e.g., "1tóutòng 头痛 have a headache Zhè shì zhēn ràng rén ~! This gives one a real headache!") Take the dao in Daoism as an example dictionary entry. 2dào 道 ① road ② channel ③ way ④ doctrine ⑤ Daoism ⑥ line ♦ for rivers/topics/etc. ♦ ① say; speak; talk chángyán dào as the saying goes Tā shuō ~: "..." He said: "..." ② think; suppose (1996: 113) This concise entry uses a superscript on dào to denote 道 as the second most commonly occurring unbound character pronounced dào, gives six English translation equivalents, distinguishes syntactic uses as a measure word and a verb, and gives two characterless usage examples chángyán dào 常言道 and Tā shuōdào 他說道.
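The ordering rules described above — letters compared across the whole word first, with tones and then relative frequency serving only as tie-breakers — can be modelled as a sort key. A simplified sketch in Python, assuming tone-number input (e.g. kuang4quan2) rather than diacritics; the dictionary's actual orthographic rules are more involved:

    def abc_sort_key(pinyin):
        """Single-sort collation key in the spirit of the ABC Dictionary.

        Letters are compared across the whole word, ignoring tones and
        syllable boundaries; the tone sequence then breaks ties.
        """
        letters = "".join(c for c in pinyin if c.isalpha())
        tones = "".join(c for c in pinyin if c.isdigit())
        return (letters, tones)

    words = ["kuang4quan2shui3", "kuang2quan3bing4", "kuang4quan2"]
    print(sorted(words, key=abc_sort_key))
    # ['kuang4quan2', 'kuang2quan3bing4', 'kuang4quan2shui3']
    # i.e. kuàngquán < kuángquǎnbìng < kuàngquánshuǐ, matching the example above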
Reception Reviews of DeFrancis' ABC Chinese–English Dictionary were published by major academic journals in linguistics (e.g., The Modern Language Journal), Asian studies (Journal of the Royal Asiatic Society), and sinology (China Review International). Most reviewers criticized certain aspects, such as the difficulty of looking up a traditional Chinese character, but also rated the innovative dictionary highly. Here are three representative examples of praise: "the most extraordinary Chinese–English dictionary I have ever had such pleasure to look Chinese words up in and to read their English definitions"; "The thorough scholarship and fresh outlook make it a valuable contribution to Chinese lexicography, while the high production standards and comprehensive coverage of the colloquial language should make it a favourite of all serious students of Modern Chinese"; "This excellent one-volume Chinese–English dictionary is a crowning achievement for John DeFrancis, one of the doyens of Chinese language teaching in the United States". A common area of complaint involves the ABC Dictionary's treatment of traditional and simplified Chinese characters. Dictionary entries give simplified characters for headwords, and only give the traditional form upon the first appearance of each character, and in the appendices. For instance, critics say, "looking up characters in traditional form is a bit more trouble than it might be, you must use a special index"; and the dictionary is "clearly not designed to be used by anyone who does serious work with nonsimplified characters". One reviewer panned the ABC Dictionary's supplementary materials, saying, for instance, that the front matter's "uncommonly profuse" dedication and Editor's Call to Action reveal "no doubt that axes are being ground" about writing reform, that the Distinctive Features of the Dictionary "reads like an abstract for a research grant application", and that most of the appendices are "a hodgepodge of pub quiz trivia". Several evaluations of the ABC Chinese–English Dictionary mention cases in which using the alphabetically arranged headword entries is more efficient than using a conventionally arranged dictionary with character head entries that list words written with that character as the first. Robert S. Bauer, a linguist of Cantonese at Hong Kong Polytechnic University, says the dictionary works best when users hear a word pronounced but do not know how to write it in characters: they can very quickly look it up in pinyin order and find the correct characters and meanings. However, to look up an unknown character's pronunciation and meaning, one needs to use a radical-indexed dictionary. Bauer says "I have generally succeeded in finding almost all the words and expressions I have tried to look up; this I regard as quite remarkable since I cannot say the same about other dictionaries I have been consulting over my more than 25 years of working on Chinese". Sean Jensen calls alphabetical collation "truly iconoclastic" in the tradition-rich world of Chinese lexicography and describes experimenting with using the dictionary. I am used to the "old style" dictionaries based on radicals, and I was disposed to approach the ABC Dictionary with some skepticism. But having used it for two months I have so [sic] say that it is nothing short of wonderful! It is a pleasure to be able to use a Chinese dictionary in the same way that one uses a French or German dictionary.
The typography is exceptionally clear, and the sheer quantity of words per page, arranged alphabetically, has the effect of bringing the melodies of spoken Chinese alive. Michael Sawer, professor of Chinese at the University of Canberra, says using the ABC Dictionary does not make it easy to quickly find all the words beginning with the same Chinese character; but it does enable readers to easily find all those pronounced the same (disregarding tonal differences), as well as which among homophonous words is used most frequently. Taking a contrary view, Karen Steffen Chung, professor of Chinese at National Taiwan University, found using DeFrancis' dictionary less satisfying than traditional dictionaries, where all compounds beginning with the same character are listed together under that character head entry. Giving the circular example of a hypothetical dictionary user wanting to find all the compounds beginning with shí 實 "to realize", which is certainly easier with a customary Chinese dictionary than with the ABC, Chung says that the alphabetic arrangement is unfortunately "its biggest drawback", and that while "this may reflect an ideal of treating Chinese primarily as a spoken rather than written language, it also goes against native habit and intuition." Scott McGinnis, professor of Chinese at the University of Maryland, explains that users of the ABC Dictionary who are already familiar with written Chinese and dictionaries organized by character headings must "forget" what they know about the Mandarin syllabic inventory and focus strictly on the spelling. For some dictionary users, purely pinyin-dependent sequencing such as cuānzi 镩子 "ice pick" to cūbào 粗暴 "rude; rough; crude" and nǎngshí 曩时 "(written) in olden days; of yore" to nánguā 南瓜 "pumpkin" "may be at least initially confusing". Jan W. Walls, professor of Chinese language and culture at Simon Fraser University, describes some minor oversights in the dictionary such as the "dīshì 的士 taxi" entry, which might imply the borrowing came directly from English, when it is actually a loanword from Cantonese dik1 si6 的士 transcribing taxi. But this is a minor point, "merely meizhong-buzu 美中不足 (defined in ABC as "blemish in sth. otherwise perfect") that should not detract from the great value of this important work … which is quite likely to become a standard reference work for English-speaking students of Mandarin, and to remain so for quite some time." The expanded 2003 ABC Dictionary had fewer academic reviews than the 1996 ABC Chinese–English Dictionary. Michael Sawer, who also reviewed the original ABC Dictionary, calls this comprehensive dictionary an "outstanding contribution to the field, in many ways better than other comparable dictionaries". He makes comparisons between the ABC Chinese–English Comprehensive Dictionary (called "ABC") and two bilingual dictionaries aimed more at native speakers of Chinese who are learning English: the Han-Ying Da Cidian ("CED" Chinese–English Dictionary), and the Xiandai Hanyu Cidian: Han-Ying Shuangyu ("CCD" Contemporary Chinese Dictionary). The 2003 ABC, like the 1996 first edition, is arranged in strict single-sort alphabetical order, while both the CED and CCD are in double-sort alphabetic order—that is, the first syllables of each word are arranged in alphabetic order, and then within each tone category in order of different characters. Take, for instance, dictionary users wanting to look up yìshi 意识 "consciousness; mentality".
A user who knows that the pronunciation begins with yì can search (in ascending number of strokes) through some eighty characters pronounced yì before finding 意, and then 意识; or a user who knows that yìshi is written 意识 can find 意 in the radical index, under the "heart radical" 心, then the 9 remaining strokes in 音, and find the page number for the 意 head entry. A great advantage of ABC is that you can immediately look up a word you have heard but whose exact tone, meaning, and characters are unknown to you. For yishi in all tonal combinations, this dictionary gives 46 different words, with 意识 easily found. What an ABC user cannot straightforwardly see (which of course they can in the other two dictionaries) is all the listed words beginning with yì 意. ABC gives grammatical information with around 30 tags, including both parts of speech ("V." for verb) and other tags ("ID." for idiom). CED provides just 11 grammatical tags, while CCD only marks numerals and classifiers. The comprehensive grammatical tags in ABC are "a most valuable feature", especially for learners of Chinese, and many words have more than one grammatical function (e.g., "xuéxí 学习 [學習] study; learn; emulate ◆ learning"). Another useful ABC grammatical tag is B.F. for "bound form" (as opposed to free-form words, mentioned above), but this distinction is not usually indicated in PRC dictionaries such as CED and CCD. Proper nouns receive a particularly ample treatment in ABC (especially compared with CED and CCD). These include people's names (e.g., some 15 beginning with the surname Lǐ 李, ranging from the famous poet Li Bai to the former PRC President Li Xiannian), names of automobiles (like Xiàlì 夏利 "Charade"), and many toponyms (Xiàwēiyí 夏威夷 "Hawaiʻi", spelled with the ʻokina). Yanfang Tang, professor of Chinese at the College of William and Mary, says the ABC Chinese–English Comprehensive Dictionary "stands above its peers as one of the most comprehensive, informative, and useful tools in the study of and dealing with the Chinese language". This dictionary is a valuable asset for several types of users. Native speakers will particularly benefit from the "authentic and accurate English translations" of Chinese words and phrases, which are an improvement over the sometimes "stiff and awkward" translations in previous Chinese–English dictionaries, which were edited predominantly by native speakers of Chinese. Chinese–English translators, students of Chinese as a foreign language, and compilers of Chinese language textbooks will find this dictionary indispensable for providing comprehensive linguistic information. Nonnative speakers of Chinese will find the dictionary handy to use, and those who are accustomed to alphabets will find locating a Chinese word in this alphabetically arranged dictionary "almost an act of second nature". Advanced learners of Chinese who have a firm command of pinyin will also benefit, especially in cases when they know how to pronounce a word but do not remember how to write it, and will be able to quickly find the character. Tang suggests an improvement for future ABC Dictionary editions: users who want to look up an unfamiliar character may find the layout of the Stroke-Order Index and Radical Index to be "awkward, inconvenient, and time-consuming", because after looking the character up, the index gives the pinyin pronunciation instead of the page number. Editions ABC Chinese–English Dictionary: Pocket edition (1999). ABC Chinese–English Dictionary: Desk reference edition (2000).
References Footnotes External links ABC Chinese Dictionary Series, News from University of Hawaiʻi Press ABC dictionaries, Wenlin Chinese dictionaries
418544
https://en.wikipedia.org/wiki/Screen%20reader
Screen reader
A screen reader is a form of assistive technology (AT) that renders text and image content as speech or braille output. Screen readers are essential to people who are blind, and are useful to people who are visually impaired, illiterate, or have a learning disability. Screen readers are software applications that attempt to convey to their users, via non-visual means like text-to-speech, sound icons, or a braille device, what people with normal eyesight see on a display. They do this by applying a wide variety of techniques that include, for example, interacting with dedicated accessibility APIs, using various operating system features (like inter-process communication and querying user interface properties), and employing hooking techniques. Microsoft Windows operating systems have included the Microsoft Narrator screen reader since Windows 2000, though separate products such as Freedom Scientific's commercially available JAWS screen reader and ZoomText screen magnifier, and the free and open-source screen reader NVDA by NV Access, are more popular for that operating system. Apple Inc.'s macOS, iOS, and tvOS include VoiceOver as a built-in screen reader, while Google's Android provides the Talkback screen reader and its Chrome OS can use ChromeVox. Similarly, Android-based devices from Amazon provide the VoiceView screen reader. There are also free and open-source screen readers for Linux and Unix-like systems, such as Speakup and Orca. Types Command-line (text) In early operating systems, such as MS-DOS, which employed command-line interfaces (CLIs), the screen display consisted of characters mapping directly to a screen buffer in memory and a cursor position. Input was by keyboard. All this information could therefore be obtained from the system either by hooking the flow of information around the system and reading the screen buffer, or by using a standard hardware output socket and communicating the results to the user. In the 1980s, the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham developed a Screen Reader for the BBC Micro and NEC Portable. Graphical Off-screen models With the arrival of graphical user interfaces (GUIs), the situation became more complicated. A GUI has characters and graphics drawn on the screen at particular positions, and therefore there is no purely textual representation of the graphical contents of the display. Screen readers were therefore forced to employ new low-level techniques, gathering messages from the operating system and using these to build up an "off-screen model", a representation of the display in which the required text content is stored. For example, the operating system might send messages to draw a command button and its caption. These messages are intercepted and used to construct the off-screen model. The user can switch between controls (such as buttons) available on the screen, and the captions and control contents will be read aloud and/or shown on a refreshable braille display. Screen readers can also communicate information on menus, controls, and other visual constructs to permit blind users to interact with these constructs. However, maintaining an off-screen model is a significant technical challenge; hooking the low-level messages and maintaining an accurate model are both difficult tasks.
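In essence, the off-screen model described above is a text store rebuilt from intercepted drawing calls. A toy illustration in Python — the draw-message hook here is invented for the sketch; real screen readers intercept platform-specific display messages:

    class OffScreenModel:
        """Toy off-screen model: text content keyed by screen position."""

        def __init__(self):
            self.items = {}  # (x, y) -> text drawn at that position

        def on_draw_text(self, x, y, text):
            # Would be called from a hook on the OS text-drawing routine
            self.items[(x, y)] = text

        def text_near(self, x, y, radius=20):
            """Recover the caption of a control near (x, y), e.g. a focused button."""
            return [t for (tx, ty), t in self.items.items()
                    if abs(tx - x) <= radius and abs(ty - y) <= radius]

    model = OffScreenModel()
    model.on_draw_text(100, 50, "OK")  # intercepted caption of a command button
    print(model.text_near(105, 55))    # ['OK'] -> would be sent to speech or braille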
Accessibility APIs Operating system and application designers have attempted to address these problems by providing ways for screen readers to access the display contents without having to maintain an off-screen model. These involve the provision of alternative and accessible representations of what is being displayed on the screen, accessed through an API. Existing APIs include:
Android Accessibility Framework
Apple Accessibility API
AT-SPI
IAccessible2
Microsoft Active Accessibility (MSAA)
Microsoft UI Automation
Java Access Bridge
Screen readers can query the operating system or application for what is currently being displayed and receive updates when the display changes. For example, a screen reader can be notified that the current focus has moved to a button and can retrieve the button's caption to communicate to the user. This approach is considerably easier for the developers of screen readers, but fails when applications do not comply with the accessibility API: for example, Microsoft Word does not comply with the MSAA API, so screen readers must still maintain an off-screen model for Word or find another way to access its contents. One approach is to use available operating system messages and application object models to supplement accessibility APIs. Screen readers can be assumed to be able to access all display content that is not intrinsically inaccessible. Web browsers, word processors, icons, windows, and email programs are just some of the applications used successfully by screen reader users. However, according to some users, using a screen reader is considerably more difficult than using a GUI, and many applications have specific problems resulting from the nature of the application (e.g. animations) or failure to comply with accessibility standards for the platform (e.g. Microsoft Word and Active Accessibility).
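As a concrete illustration of this API-based approach, the sketch below uses the Android Accessibility Framework, one of the APIs listed above. It is a minimal, hypothetical service (the class name and speech handling are invented for illustration; a real service must also be declared in the app manifest and enabled by the user in system settings), not a complete screen reader:

```java
import android.accessibilityservice.AccessibilityService;
import android.speech.tts.TextToSpeech;
import android.view.accessibility.AccessibilityEvent;

// Hypothetical minimal screen-reader-like service: the system delivers
// focus/click events, and the service speaks the affected widget's label.
public class TinyScreenReader extends AccessibilityService {
    private TextToSpeech tts;

    @Override
    protected void onServiceConnected() {
        tts = new TextToSpeech(this, status -> { /* engine ready */ });
    }

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        int type = event.getEventType();
        if (type == AccessibilityEvent.TYPE_VIEW_FOCUSED
                || type == AccessibilityEvent.TYPE_VIEW_CLICKED) {
            // getText() exposes the label(s) the platform reports for the
            // widget (e.g. a button caption); no off-screen model is needed.
            CharSequence label = event.getText().isEmpty()
                    ? event.getContentDescription()
                    : event.getText().get(0);
            if (label != null) {
                tts.speak(label, TextToSpeech.QUEUE_FLUSH, null, "tiny-reader");
            }
        }
    }

    @Override
    public void onInterrupt() {
        // Called when the system wants speech stopped.
        if (tts != null) tts.stop();
    }
}
```

Note how the service never inspects pixels or drawing messages: it relies entirely on the labels the application chooses to expose, which is exactly why the approach breaks down for applications that do not comply with the API.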
Self-voicing programs and applications Some programs and applications have voicing technology built in alongside their primary functionality. These programs are termed self-voicing and can be a form of assistive technology if they are designed to remove the need to use a screen reader. Cloud-based Some telephone services allow users to interact with the internet remotely. For example, TeleTender can read web pages over the phone and does not require special programs or devices on the user side. Virtual assistants can sometimes read out written documents (textual web content, PDF documents, e-mails, etc.); the best-known examples are Apple's Siri, Google Assistant, and Amazon Alexa. Web-based A relatively new development in the field is web-based applications like Spoken-Web that act as web portals, managing content like news updates, weather, and science and business articles for visually impaired or blind computer users. Other examples are ReadSpeaker or BrowseAloud, which add text-to-speech functionality to web content. The primary audience for such applications is those who have difficulty reading because of learning disabilities or language barriers. Although functionality remains limited compared to equivalent desktop applications, the major benefit is to increase the accessibility of said websites when they are viewed on public machines where users do not have permission to install custom software, giving people greater "freedom to roam". This functionality depends on the quality of the software, but also on a logical structure of the text: use of headings, punctuation, presence of alternate attributes for images, etc., is crucial for a good vocalization. A web site may also have an attractive look thanks to appropriate two-dimensional positioning with CSS, yet its standard linearization (for example, with CSS and JavaScript suppressed in the browser) may not be comprehensible. Customization Most screen readers allow the user to select whether most punctuation is announced or silently ignored. Some screen readers can be tailored to a particular application through scripting. One advantage of scripting is that it allows customizations to be shared among users, increasing accessibility for all. JAWS enjoys an active script-sharing community, for example. Verbosity Verbosity is a feature of screen reading software that supports vision-impaired computer users. Speech verbosity controls enable users to choose how much speech feedback they wish to hear. Specifically, verbosity settings help users construct a mental model of web pages displayed on their computer screen. Based on verbosity settings, a screen-reading program informs users of certain formatting changes, such as when a frame or table begins and ends, where graphics have been inserted into the text, or when a list appears in the document. Language Some screen readers can read text in more than one language, provided that the language of the material is encoded in its metadata. Some screen reading programs also include language verbosity, which automatically applies verbosity settings related to the speech output language: for example, if a user navigated to a website based in the United Kingdom, the text would be read with an English accent. See also List of screen readers Screen magnifier Self-voicing Speech processing Speech recognition Speech synthesis Vinux VoiceOver References External links Fangs screen reader emulator, an open-source Mozilla Firefox extension that simulates how a web page would look in JAWS Assistive technology
1002038
https://en.wikipedia.org/wiki/Robert%20Taylor%20%28computer%20scientist%29
Robert Taylor (computer scientist)
Robert William Taylor (February 10, 1932 – April 13, 2017), known as Bob Taylor, was an American Internet pioneer who led teams that made major contributions to the personal computer and other related technologies. He was director of ARPA's Information Processing Techniques Office from 1965 through 1969, founder and later manager of Xerox PARC's Computer Science Laboratory from 1970 through 1983, and founder and manager of Digital Equipment Corporation's Systems Research Center until 1996. Uniquely, Taylor had no formal academic training or research experience in computer science; Severo Ornstein likened Taylor to a "concert pianist without fingers," a perception reaffirmed by historian Leslie Berlin: "Taylor could hear a faint melody in the distance, but he could not play it himself. He knew whether to move up or down the scale to approximate the sound, he could recognize when a note was wrong, but he needed someone else to make the music." His awards include the National Medal of Technology and Innovation and the Draper Prize. Taylor was known for his high-level vision: "The Internet is not about technology; it's about communication. The Internet connects people who have shared interests, ideas and needs, regardless of geography." Early life Robert W. Taylor was born in Dallas, Texas, in 1932. His adoptive father, Rev. Raymond Taylor, was a Methodist minister who held degrees from Southern Methodist University, the University of Texas at Austin and Yale Divinity School. The family (including Taylor's adoptive mother, Audrey) was highly itinerant during Taylor's childhood, moving from parish to parish. Having skipped several grades as a result of his enrollment in an experimental school, he began his higher education at Southern Methodist University at the age of 16 in 1948; while there, he was "not a serious student" but "had a good time." Taylor then served a stint in the United States Naval Reserve during the Korean War (1952–1954) at Naval Air Station Dallas before returning to his studies at the University of Texas at Austin under the GI Bill. At UT he was a "professional student," taking courses for pleasure. In 1957, he earned an undergraduate degree in experimental psychology from the institution, with minors in mathematics, philosophy, English and religion. He subsequently earned a master's degree in psychology from Texas in 1959 before electing not to pursue a PhD in the field. Reflecting his background in experimental psychology and mathematics, he completed research in neuroscience, psychoacoustics and the auditory nervous system as a graduate student. According to Taylor, "I had a teaching assistantship in the department, and they were urging me to get a PhD, but to get a PhD in psychology in those days, maybe still today, you have to qualify and take courses in abnormal psychology, social psychology, clinical psychology, child psychology, none of which I was interested in. Those are all sort of in the softer regions of psychology. They're not very scientific, they're not very rigorous. I was interested in physiological psychology, in psychoacoustics or the portion of psychology which deals with science, the nervous system, things that are more like applied physics and biology, really, than they are what normally people think of when they think of psychology. So I didn't want to waste time taking courses in those other areas and so I said I'm not going to get a PhD."
After leaving Texas, Taylor taught math and coached basketball for a year at Howey Academy, a co-ed prep school in Florida. "I had a wonderful time but was very poor, with a second child — who turned out to be twins — on the way," he recalled. Taylor then took engineering jobs with aircraft companies at better salaries. He helped to design the MGM-31 Pershing as a senior systems engineer for defense contractor Martin Marietta (1960–1961) in Orlando, Florida. In 1962, after submitting a research proposal for a flight control simulation display, he was invited to join NASA's Office of Advanced Research and Technology as a program manager assigned to the manned flight control and display division. Computer career Taylor worked for NASA in Washington, D.C. while the Kennedy administration was backing research and development projects such as the Apollo program for a manned moon landing. In late 1962 Taylor met J. C. R. Licklider, who was heading the new Information Processing Techniques Office (IPTO) of the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Like Taylor, Licklider had specialized in psychoacoustics during his graduate studies. In March 1960, Licklider had published "Man-Computer Symbiosis," an article that envisioned new ways to use computers. This work was an influential roadmap in the history of the Internet and the personal computer, and it greatly influenced Taylor. During this period, Taylor also became acquainted with Douglas Engelbart at the Stanford Research Institute in Menlo Park, California. He directed NASA funding to Engelbart's studies of computer-display technology at SRI that led to the computer mouse. The public demonstration of a mouse-based user interface was later called "the Mother of All Demos": at the Fall 1968 Joint Computer Conference in San Francisco, Engelbart, Bill English, Jeff Rulifson and the rest of the Human Augmentation Research Center team at SRI showed on a big screen how Engelbart, sitting on a San Francisco stage, could use his mouse to manipulate a computer located in Menlo Park. ARPA In 1965, Taylor moved from NASA to IPTO, first as a deputy to Ivan Sutherland (who returned to academia shortly thereafter), to fund large programs in advanced computing research at major universities and corporate research centers throughout the United States. Among the computer projects that ARPA supported was time-sharing, in which many users could work at terminals to share a single large computer. Users could work interactively instead of using punched cards or punched tape in a batch processing style. Taylor's office in the Pentagon had a terminal connected to a time-sharing system at the Massachusetts Institute of Technology, a terminal connected to the Berkeley Timesharing System at the University of California, Berkeley, and a third terminal connected to the System Development Corporation in Santa Monica, California. He noticed that each system developed a community of users, but that each community was isolated from the others. Taylor hoped to build a computer network to connect the ARPA-sponsored projects together, if nothing else, to let him communicate with all of them through one terminal. By June 1966, Taylor had been named director of IPTO; in this capacity, he shepherded the ARPANET project until 1969. Taylor had convinced ARPA director Charles M. Herzfeld to fund a network project earlier, in February 1966, and Herzfeld transferred a million dollars from a ballistic missile defense program to Taylor's budget.
Taylor hired Larry Roberts from MIT Lincoln Laboratory to be the project's first program manager. Roberts at first resisted moving to Washington, D.C., until Herzfeld reminded the director of Lincoln Laboratory that ARPA dominated its funding. Licklider continued to provide guidance, and Wesley A. Clark suggested the use of a dedicated computer, called the Interface Message Processor, at each node of the network instead of centralized control. At the 1967 Symposium on Operating Systems Principles, a member of Donald Davies' team (Roger Scantlebury) presented their research on packet switching and suggested it for use in the ARPANET. ARPA issued a request for quotation (RFQ) to build the system, which was awarded to Bolt, Beranek and Newman (BBN). AT&T Bell Labs and IBM Research were invited to join, but were not interested. At a pivotal meeting in 1967, most participants resisted testing the new network; they thought it would slow down their research. In 1968, Licklider and Taylor published "The Computer as a Communication Device". The article laid out the future of what the Internet would eventually become. It began with a prophetic statement: "In a few years, men will be able to communicate more effectively through a machine than face to face." Beginning in 1967, Taylor was sent by ARPA to investigate inconsistent reports coming from the Vietnam War. Only 35 years old, he was given an identification card with the military rank equivalent to his civilian position (brigadier general), thus ensuring protection under the Geneva Convention if he were captured. Over the course of several trips to the area, he established a computer center at the Military Assistance Command, Vietnam base in Saigon. In his words: "After that the White House got a single report rather than several. That pleased them; whether the data was any more correct or not, I don't know, but at least it was more consistent." The Vietnam project took him away from directing research, and "by 1969 I knew ARPANET would work. So I wanted to leave." The election of Richard Nixon to the presidency and ongoing tensions with Roberts (who, despite maintaining a putatively cordial relationship with Taylor, resented his lack of research experience and his appointment to the IPTO directorship) also factored into his decision to leave ARPA. For about a year, he joined Sutherland and David C. Evans at the University of Utah in Salt Lake City, where he had funded a center for research on computer graphics while at ARPA. Unable to acclimate to the milieu dominated by the Church of Jesus Christ of Latter-day Saints, Taylor moved to Palo Alto, California, in 1970 to become associate manager of the Computer Science Laboratory (CSL) at Xerox Corporation's new Palo Alto Research Center. Xerox Although Taylor played an integral role in recruiting scientists for the laboratory from the ARPA network, physicist and Xerox PARC director George Pake felt that he was an unsuitable candidate to manage the group because he lacked a relevant doctorate and subsequent experience in academic research. While Taylor eschewed a Pake-proposed research program in computer graphics in favor of largely administering the day-to-day operations of the laboratory from its inception, he acquiesced to the appointment of BBN scientist and ARPA network acquaintance Jerome I. Elkind as titular CSL manager in 1971. Technologies developed at PARC under Taylor's aegis focused on reaching beyond the ARPANET to develop what has become the Internet, and the systems that support today's personal computers.
They included:
Powerful personal computers (including the Xerox Alto and later "D-machines") with windowed displays and graphical user interfaces that inspired the Apple Lisa and Macintosh. The Computer Science Laboratory built the Alto, which was conceived by Butler Lampson and designed mostly by Charles P. Thacker, Edward M. McCreight, Bob Sproull and David Boggs. The Learning Research Group of PARC's Systems Science Laboratory (led by Alan Kay) added the software-based "desktop" metaphor.
Ethernet, which networks local computers within a building or campus, and the first internet, a network that connected the Ethernet to the ARPANET utilizing the PARC Universal Packet (PUP) protocol, a forerunner to TCP/IP. It was primarily designed by Robert Metcalfe, Boggs, Thacker and Lampson.
The electronics and software that led to the laser printer (spearheaded by optical engineer Gary Starkweather, who transferred from Xerox's Webster, New York laboratory to work with CSL) and the Interpress page description language that allowed John Warnock and Chuck Geschke to found Adobe Systems.
"What-you-see-is-what-you-get" (WYSIWYG) word-processing programs, as exemplified by Bravo, which Charles Simonyi took to Microsoft to serve as the basis for Microsoft Word.
SuperPaint, a pioneering graphics program and framebuffer computer system developed by Richard Shoup. The software was written in consultation with future Pixar co-founder Alvy Ray Smith, who could not secure an appointment at PARC and was retained as an independent contractor. Although Shoup received a special Emmy Award (shared with Xerox) in 1983 and an Academy Scientific Engineering Award (shared with Smith and Thomas Porter) in 1998 for his achievement, program development continued to be marginalized by PARC, ultimately precipitating Shoup's departure in 1979.
Belying his lack of programming and engineering experience, Taylor was noted for his strident advocacy of Licklider-inspired distributed personal computing and his ability to maintain collegial and productive relationships among what was widely perceived as the foremost array of the epoch's leading computer scientists. This was exemplified by a weekly staff meeting at PARC (colloquially known as "Dealer", after Edward O. Thorp's Beat the Dealer) in which staff members would lead a discussion about myriad topics. They would sit in a circle of beanbag chairs, and open debate was encouraged. According to Kay, the meeting "was part of the larger ARPA community to learn how to argue to illuminate rather than merely to win. ... The main purposes of Dealer -- as invented and implemented by Bob Taylor -- were to deal with how to make things work and make progress without having a formal manager structure. The presentations and argumentation were a small part of a deal session (they did quite bother visiting Xeroids). It was quite rare for anything like a personal attack to happen (because people for the most part came into PARC having been blessed by everyone there -- another Taylor rule -- and already knowing how 'to argue reasonably')."
Throughout his tenure at PARC, Taylor frequently clashed with Elkind (who held budgetary responsibility for new projects but found his managerial authority undercut by Taylor's close relationships with the research staff) and Pake (who did not countenance Taylor's outsized influence in the laboratory and his deprecatory attitude toward Xerox's physics research program, then directly overseen by Pake); as a result, he was not officially invited to the company's "Futures Day" demo (marking the public premiere of the Alto) in Boca Raton, Florida, in 1977. However, after one of Elkind's extended absences (stemming from his ongoing involvement in other corporate and government projects), Taylor became the manager of the laboratory in early 1978. In 1983, physicist and integrated circuit specialist William J. Spencer became director of PARC. Spencer and Taylor disagreed about budget allocations for CSL (exemplified by the ongoing institutional divide between computer science and physics) and about CSL's frustration with Xerox's inability to recognize and use what it had developed. By the end of the year, Taylor and most of the researchers at CSL had left Xerox. A coterie of leading computer scientists (including Licklider, Donald Knuth and Dana Scott) expressed their displeasure with Xerox's decision not to retain Taylor in a letter-writing campaign to CEO David Kearns. DEC SRC Taylor was hired by Ken Olsen of Digital Equipment Corporation, and formed the Systems Research Center in Palo Alto. Many of the former CSL researchers came to work at SRC. Among the projects at SRC were the Modula-3 programming language; the snoopy cache, used in the Firefly multiprocessor workstation; the first multi-threaded Unix system; the first user interface editor; the AltaVista search engine; and a networked window system. Retirement and death Taylor retired from DEC in 1996. Following his divorce (coinciding with his departure from Xerox), he lived in a secluded house in Woodside, California. In 2000, he voiced two concerns about the future of the Internet: control and access. In his words: There are many worse ways of endangering a larger number of people on the Internet than on the highway. It's possible for people to generate networks that reproduce themselves and are very difficult or impossible to kill off. I want everyone to have the right to use it, but there's got to be some way to insure responsibility. Will it be freely available to everyone? If not, it will be a big disappointment. On April 13, 2017, he died at his home in Woodside, California. His son said he had suffered from Parkinson's disease and other health problems. Awards In 1984, Taylor, Butler Lampson, and Charles P. Thacker received the ACM Software Systems Award "for conceiving and guiding the development of the Xerox Alto System demonstrating that a distributed personal computer system can provide a desirable and practical alternative to time-sharing." In 1994, all three were named ACM Fellows in recognition of the same work. In 1999, Taylor received a National Medal of Technology and Innovation. The citation read "For visionary leadership in the development of modern computing technology, including computer networks, the personal computer and the graphical user interface." In 2004, the National Academy of Engineering awarded him, along with Lampson, Thacker and Alan Kay, its highest award, the Draper Prize. The citation reads: "for the vision, conception, and development of the first practical networked personal computers."
In 2013, the Computer History Museum named him a Museum Fellow, "for his leadership in the development of computer networking, online information and communications systems, and modern personal computing." See also Internet pioneers References Further reading Reprints of early papers, with a preface by Taylor External links The New Old Boys From the ARPAnet, an extract from 'Tools for Thought' by Howard Rheingold 1984 ACM Software Systems Award citation 1994 ACM Fellow citation 2004 Draper Prize citation 1932 births 2017 deaths American computer scientists Digital Equipment Corporation people Xerox people Internet pioneers Scientists at PARC (company) National Medal of Technology recipients Fellows of the Association for Computing Machinery Draper Prize winners University of Texas at Austin alumni United States Navy personnel of the Korean War People from Dallas People from Woodside, California Neurological disease deaths in California Deaths from Parkinson's disease
50346842
https://en.wikipedia.org/wiki/Cambridge%20Animation%20Systems
Cambridge Animation Systems
Cambridge Animation Systems was a British software company that developed a traditional animation software package called Animo, and is now part of the Canadian company Toon Boom Technologies. It was based in Cambridge, England, hence the name. Established in 1990, it created the Animo software in 1992 after acquiring Compose in Color, which was developed by Oliver Unter-ecker. Animo was used for several animated feature films, shorts, and television series. It powered the UK animation industry until the 2000s, being used by studios such as King Rollo Films, Telemagination, and Cosgrove Hall Films, and it was also used by studios in other countries, most notably Warner Bros. Feature Animation, DreamWorks, and Nelvana. In total, Animo was used by over 300 studios worldwide. In 2000, CAS developed Animo Inkworks, a plug-in which allowed Maya and 3ds Max users to export 3D data into Animo and integrate it into 2D animation via the Scene III plug-in. In 2001, they developed another plug-in called Animo Sniffworks, which exports Flash output to Maya. In 2009, CAS was acquired by Toon Boom Technologies and has since folded. See also Toon Boom Technologies, which acquired CAS and its Animo package Toonz, another prolific animation software package used by the 2D industry in the 1990s and 2000s USAnimation Computer Animation Production System (CAPS), used by Disney from the 1990s to the mid-2000s Adobe Flash List of 2D animation software References External links 2D animation software Defunct software companies Defunct companies of England Software companies of the United Kingdom Companies based in Cambridge Software companies established in 1990 Software companies disestablished in 2009 British companies established in 1990 1990 establishments in England 2009 disestablishments in England
14233717
https://en.wikipedia.org/wiki/Shackerstone%20railway%20station
Shackerstone railway station
Shackerstone railway station is a preserved railway station and heritage museum in Leicestershire, Central England. It is the terminus and the headquarters of the heritage Battlefield Line Railway, with the Shackerstone Railwayana Museum, tea room, shop, loco shed and main rolling stock located here. The Ashby Canal is nearby. History The original intention was to site the station where it is today, but in response to a request from Lord Howe of Gopsall Hall, the Committee agreed to move it north of the junction and call it "Gopsall"; it soon changed its mind, however, and moved the station back to the junction. Land for this purpose was bought from Lord Howe, who in 1877 was allowed to plant trees along the approach road to the station. The station was designed by the Midland Railway company architect John Holloway Sanders. Its position made Shackerstone strategically important in the operation of the line, and it seems to have been selected as the headquarters of the inspector, Manning, who was in charge of the working of the line. He probably combined the post with the stationmastership (as was done on the GN-LNWR Joint Line in East Leicestershire at Melton Mowbray), for no stationmaster is named at Shackerstone in the first staff list, and Manning's pay, 50 shillings per week, was much higher than that of any other member of the ANJR's staff. The station must also have ranked in the top of the three classes of station planned by the Committee for constructional purposes, the estimated cost being £1,300 plus £350 for the stationmaster's house. The building of Shackerstone station was undertaken by Messrs. J. & E. Woods of Derby, as part of a contract that also included the stations of Measham, Snarestone, Heather and Hugglescote, for which the contract price was £12,826.15; on this basis the price of Shackerstone should have been about £3,500. The company minutes contain no reference to payments to any outside architect in this connection, and as the stations on the ANJR are similar to a few on the Midland system, they are likely the work of the Midland Railway's own staff, consistent with the attribution to Sanders above. The station became a grade II listed building in 1989. The Sheds The loco shed is signposted from Platform 1 and is only a short walk from the station through the original goods yard. Access to parts of the shed and workshops is restricted for reasons of safety. The shed is made up of various sections of local NCB buildings and even part of a Nuneaton cinema. The shed plays host to many different locomotives and is sectioned into two key areas. The main and central area is the "running shed"; this features easy access to both the workshop and stores, and includes an inside locomotive inspection pit. The second area, which features two roads at the south end of the shed, is used mainly for storage of long-term projects. References External links Battlefield Line Railway Heritage railway stations in Leicestershire Museums in Leicestershire Railway museums in England Railway stations in Great Britain opened in 1873 Railway stations in Great Britain closed in 1931 Former London and North Western Railway stations Former Midland Railway stations Grade II listed railway stations Grade II listed buildings in Leicestershire John Holloway Sanders railway stations
1357358
https://en.wikipedia.org/wiki/In%20R%20Voice
In R Voice
In'R'Voice (birth name Den Kozlov) began his music career in Moscow, Russia. In the late 1980s, he was a fan of classic industrial bands like Skinny Puppy, Front 242 and Nitzer Ebb. Driven by the innovative sound of industrial music, Kozlov began to experiment with analogue synthesizers. In 1992, he recorded his first tracks at Studio “Tandem”, which recorded mainly pop musicians but had very enthusiastic sound engineers. Together they developed the sound of Kozlov's project Inner Resonance Voice. In 1994, In’R’Voice became hugely popular on Moscow's main radio station Maximum and in nightclubs. A year later he received an award for innovations in music from the Ministry of Culture of Russia. In 1996, Kozlov visited London, where he attended early trance music events held by Transient Records (the Otherworld party) and by the label Return To The Source at the Fridge club, as well as parties by Tsuyoshi Suzuki's label “Pagan” (Matsuri Productions). He recorded collaboration tracks with Tim Healey and Seb Taylor. Kozlov helped to organise Moscow concerts for leading trance projects including Shakta, Slide, Quirk, DJs Baraka, John Phantasm, Mike McGuire (Juno Reactor), and Chris Organic. In 1999, Kozlov relocated to London, and played alongside James Monro, Blue Planet Corporation, Tim Schuldt, Infected Mushroom, Bumbling Loons, Shakta, Hux Flux and many more. The first full In’R’Voice album, Resonance Metaphizix, was mastered at Abbey Road Studios in London. After 2004, Kozlov began to experiment with other styles of music, and recorded “Digital Shamanism” on London's Optica Records, “The Scent of Russian Dreams” on Sphere Records and “Do You Sea What I See” on System Recordings in the US. In 2008, Kozlov founded his own multigenre EDM record label, Kissthesound Records (kissthesound.com), and in 2013 the tech-house label Axiomatic Records (axiomatic-records.com), aiming to release music by young, talented musicians from Russia and Europe working in different styles. Constantly widening his catalogue, Kozlov has released music under many project names, such as Den Kozlov, Peace Data, Decay Axiomatic, Emotion Code, Karmahacker, Technokitsch, Dive Craft, Shagging Harmonies, T.E.C.H.O., Record Needle Injection, X-Television, Pixelliadians, Psy-Phi Generation, Babnick Enemy, Krolex, S.H.L.I., X-Alt Project, Slake Philter, Overtone Epidemic, Slick Tweak, Levitating Cat, Love In Decay, Eastern Promises, See You Later Oscillator, and Moot.
Discography

Digital albums
IN’R’VOICE – Telekinesis (Electro-Breaks Version) (The Urban Sound Records 2005 / UK)
IN’R’VOICE – Bittersweet (Kissthesound Records 2009 / UK)
IN’R’VOICE – Magnetic (Kissthesound Records 2010 / UK)
IN’R’VOICE – Futurepast (Kissthesound Records 2012 / UK)
IN’R’VOICE – Magnetic Future | B-Sides, Singles, Hits 1992-2012 (Kissthesound Records 2012 / UK)
IN’R’VOICE – Disentangle (Kissthesound Records 2012 / UK)
IN’R’VOICE – 1995 Infinity remixes (Kissthesound Records 2013 / UK)
IN’R’VOICE – Reanitrance (Kissthesound Records 2014 / UK)
PEACE DATA – Infected Washroom (Arkona Creation Records 2010 / UK)
PEACE DATA – TranceSceneDental (Sun Station Records 2012 / RU)
PEACE DATA – Some More Dark (Kissthesound Records 2013 / UK)
KARMAHACKER – Share My Wings (The Urban Sound Records 2005 / UK)
KARMAHACKER – Heartbreaker (Kissthesound Records 2010 / UK)
EMOTION CODE – Mesmerise The Future (The Urban Sound Records 2005 / UK)
EMOTION CODE – Triggers Of Imagination (The Urban Sound Records 2005 / UK)
EMOTION CODE – Triggers Of Imagination (Kissthesound Records 2008 / UK)
EMOTION CODE – Out Of The Blue (Kissthesound Records 2009 / UK)
DEN KOZLOV – Inxinema (Kissthesound Records 2008 / UK)
DEN KOZLOV – Digital Shamanism Extended Digital Edition (Kissthesound Records 2010 / UK)
DEN KOZLOV – Galactic Pulse Music (Kissthesound Records 2010 / UK)
DEN KOZLOV – Selected Digital Tapes | The Best Of Den Kozlov Downtempo Edition (Kissthesound Records 2012 / UK)
DEN KOZLOV – Selected Digital Tapes | The Best Of Den Kozlov Breaks Edition (Kissthesound Records 2012 / UK)
DEN KOZLOV – Selected Digital Tapes | The Best Of Den Kozlov House Edition (Kissthesound Records 2012 / UK)
DEN KOZLOV ft S.Gavrilov – Do You Sea What I See (System Recordings 2010 / USA)
DEN KOZLOV – Infrangible Duality (System Recordings 2011 / USA)
DEN KOZLOV – Inexplicable Premonitions (System Recordings 2013 / USA)
PSY-PHI GENERATION – Impact (Kissthesound Records / Cytopia 2007 / Holland)
PIXELLIADIANS – Vortex (Kissthesound Records 2008 / UK)
PIXELLIADIANS – All In One (Kissthesound Records 2011 / UK)
T.E.C.H.O. – Oxitocin (Zenzontle Records 2011 / Canada)
SEE YOU LATER OSCILLATOR – Amphigory (Kissthesound Records 2012 / UK)

CD albums
IN’R’VOICE – Resonance Metaphizix 2CD (Sphere Records 2001 / UK)
IN’R’VOICE – Inner Vision (re-issue) (Pink Room 1996 + Shum Records 2003 / RU)
IN’R’VOICE – Outer Space (Shum Records 2004 / RU)
IN’R’VOICE – Telekinesis (Shum Records 2005 / RU)
IN’R’VOICE – The Scent Of Russian Dreams (Optica Records 2006 / UK)
PEACE DATA – Peace Depth (Shum Records 2004 / RU)
PEACE DATA – Infected Washroom (Arkona Creation 2010 / UK)
DEN KOZLOV – Digital Shamanism 2CD (Optica Records 2004 / UK)
DEN KOZLOV – Digital Shamanism 2CD (Shum Records 2004 / RU)
KARMAHACKER – Bending The Reality (Shum Records 2005 / RU)
S.H.L.I. – Shli Online (Shum Records 2006 / RU)
EMOTION CODE – Infused With The Spiral (Shum Records 2005 / RU)

Digital compilations
IN’R’VOICE vs AKADO – Oxymoron (Uber Trend Colour Psychedelic Purple / Kissthesound 2009)
IN’R’VOICE – Human Recycle (Other Worlds vol.1 Downtempo / Kissthesound Records 2013 / UK)
IN’R’VOICE – Deep Reach (Other Worlds vol.3 No Beat / Kissthesound Records 2014 / UK)
RECORD NEEDLE INJECTION – Outstructured (Other Worlds vol.2 Broken Beat / Kissthesound Records 2014 / UK)
DEN KOZLOV – The Universe Is A Spiral (Other Worlds vol.3 No Beat / Kissthesound Records 2014 / UK)
DEN KOZLOV – On Hold (ZR 5th Anniversary / Zenzontle Records 2013 / Canada)
DEN KOZLOV – Forest (Chill For A Winter Morning / System Recordings 2010 / USA)
DEN KOZLOV ft S.Gavrilov – Lost In You (Celebrating 10 Years of Breaks / System Recordings 2011 / USA)
DEN KOZLOV ft S.Gavrilov – Lost In You (These Are The Breaks / System Recordings 2010 / USA)
DEN KOZLOV ft S.Gavrilov – We Change Forever (Rocktronica / System Recordings 2010 / USA)
DEN KOZLOV ft S.Gavrilov – Confession (Chill For A Late Night / System Recordings 2010 / USA)
DEN KOZLOV ft S.Gavrilov – Confession (Celebrating 10 Years Of Chill / System Recordings 2010 / USA)
DENIS ALEXANDER – The Perfect Date (Bakkelit 2.1 / Spiral Trax Records 2008 / Germany)
DEYA DOVA – Spaceman (Peace Data Remix) (Deya Dova Remixed / Reflekta Records 2011 / Australia)
PEACE DATA – Nasty Things (We Play Trance Vol.1 / Paranoja Records 2009)
PEACE DATA – Nasty Things (Sound Of Psy Trance Goa / Volt9 Records 2009)
PEACE DATA – Kat Niet Eten Hersenen (9 Lives / Sangoma Records 2012 / Germany)
PEACE DATA and E.C.T. – P-Funk P-Monk (Brain Screw Vol.2 / Parvati Records 2012 / Denmark)
PEACE DATA and E.C.T. – What The Flac (Footprints Vol.2 / Parvati Records 2013 / Denmark)
PEACE DATA – Magnet (Other Worlds vol.2 Broken Beat / Kissthesound Records 2014 / UK)
EMOTION CODE – Push It Closer (Drum’N’Bass Vol.1 / Quebola Records 2010)
EMOTION CODE – Push It Closer (UK Drum’N’Bass and Breakbeat Vol.2 / Quebola Records 2010)
EMOTION CODE – Creating Out Of Chaos (Break The Beat Vol.1 / Vinyl Loops Records 2011)
EMOTION CODE – Push It Closer (Break The Beat Vol.2 / Vinyl Loops Records 2011)
EMOTION CODE – Creating Out Of Chaos (UK Drum’N’Bass and Breakbeat Vol.1 / Quebola Records 2011)
EMOTION CODE – Seventeenth Shade Of Tone (Other Worlds vol.2 Broken Beat / Kissthesound Records 2014 / UK)
S.H.L.I. – Solar Mission (Uber Trend Colour Psychedelic Purple / Kissthesound 2009)
SLAKE PHILTER – L.B.F. (Uber Trend Electric Indigo / Kissthesound 2010)
OVERTONE EPIDEMIC – Overkill (Glamour Underground Vol.1 / Kissthesound 2009)
SLICK TWEAK – Stepping Up (Glamour Underground Vol.1 / Kissthesound 2009)
KROLEX – Entering Yellowness (Glamour Underground Vol.1 / Kissthesound 2009)
EASTERN PROMISES – Eastern Kiss (Glamour Underground Vol.3 / Kissthesound 2010)
EMOTION CODE & HELSKANKI – Rush Hour (Kissthesound Of Electronica Vol.1 / Kissthesound 2010)
DEN KOZLOV ft. INNESSA – Searching For Something (Glamour Underground Vol.4 / Kissthesound 2011)
DECAY AXIOMATIC – X-Ray (Glamour Underground Vol.5 / Kissthesound 2012)
VIRTUAL MODE – Alone (Den Kozlov Remix) (Kissthesound Of Electronica Vol.2 / Kissthesound 2012)
DENNY DE KAY – Sophia (Nimbus Formation Remix) (Other Worlds vol.2 Broken Beat / Kissthesound Records 2014 / UK)

CD compilations
IN’R’VOICE – Ne Voules Vous Pas... (Object#1 / R.M.G. 1997 / RU)
IN’R’VOICE AND SHAKTA – Present Moment (Distance to Goa 9 / Distance Records 2000 / EU)
IN’R’VOICE ft. LOA – Temple-Works (Orbis / Sphere Records 2000 / UK)
IN’R’VOICE – Resonance (Atom Smasher / Optica Records 2000 / UK)
IN’R’VOICE – Breakthrough (Chapel Perlios / Kukomi Records 2000 / UK)
IN’R’VOICE vs DARK SOHO – Depth Of Emotion (Orbis II / Sphere Records 2001 / UK)
IN’R’VOICE – Space Boot (Psychoaneasis / Sphere Records 2001 / UK)
IN’R’VOICE vs MUMIY TROLL – Shesasinger (Electronic Children / Bamba Records 2000 / Germany)
IN’R’VOICE vs MUMIY TROLL – Sheisasinger (Моя Певица | Другие Места / Real Records 2000 / RU)
IN’R’VOICE – Esoteric 013 (Trance Psychedelic Flashbacks 5 / Rumour Records 2002 / UK)
IN’R’VOICE and SHAKTA (Feed The Flame by Shakta / Dragonfly Records 2004 / UK)
IN’R’VOICE – Killaherts (Goa Vol15 / Yellow Sunshine Explosion 2006 / UK)
KARMAHACKER – Meditation (Trance Destiny / Passion Music 2002 / UK)
KARMAHACKER – I Know U Somewhere (Trance Psychedelic Flashbacks 5 / Rumour Records 2002 / UK)
KARMAHACKER – Shaman's Trip To Outland (Chill@The Global Cafe / Rumour Records 2002 / UK)
PEACE DATA – Peace Depth (Eskimo / Resonoise Records 2000 / UK)
PEACE DATA – Manda(la) (Natural Selection Vol.2 / Shum Records 2004 / RU)
PEACE DATA – Kat Niet Eten Hersenen (9 Lives / Sangoma Records 2012 / Germany)
PEACE DATA and E.C.T. – P-Funk P-Monk (Brain Screw Vol.2 / Parvati Records 2012 / Denmark)
PEACE DATA and E.C.T. – What The Flac (Footprints Vol.2 / Parvati Records 2013 / Denmark)
S.H.L.I. (IN’R’VOICE & LOA) – I Am The Master (Atom Smasher / Optica Records 2000 / UK)

Digital EPs
IN’R’VOICE ft. SEONE – Bass From Outer Space E.P. (Kissthesound / Cytopia 2008 / Holland)
IN’R’VOICE – The Time Shifter E.P. (Inevitable Records 2008 / UK)
IN’R’VOICE – Magnetic Remixes (Kissthesound Records 2010 / UK)
IN’R’VOICE – Sea Is My Blood Remixes (Kissthesound Records 2010 / UK)
PEACE DATA – Paper Happiness E.P. (Kissthesound Records 2010 / UK)
PEACE DATA – Peace Of Paper (Kissthesound Records 2012 / UK)
DEN KOZLOV – Guarding Your Sleep Remixes (Kissthesound Records 2010 / UK)
DEN KOZLOV – Emotions (Kissthesound Records 2011 / UK)
DENNY DE CAY & SAMANTHA FARRELL – Love And Decay (Axiomatic Records 2013 / RU)
TECHNOKITSCH – Body Language (Axiomatic Records 2013 / RU)
T.E.C.H.O. – Love Candy E.P. (Kissthesound Records 2011 / UK)
T.E.C.H.O. – Sophia E.P. (Kissthesound Records 2012 / UK)
DIVE CRAFT – Deeper House (Axiomatic Records 2013 / RU)
DIVE CRAFT – Telepathy (Axiomatic Records 2013 / RU)
SHAGGING HARMONIES – Blades E.P. (Kissthesound Records / UK)
X-TELEVISION – Kiss Ya Later E.P. (Kissthesound Records 2008 / UK)
S.H.L.I. – I Am The Master E.P. (Kissthesound / Cytopia 2008 / Holland)
LEVITATING CAT – Flying Single E.P. (Kissthesound Records 2009 / UK)
LEVITATING CAT – Sand Stomp (Axiomatic Records 2013 / RU)
RECORD NEEDLE INJECTION – Mirror Saw E.P. (Kissthesound Records 2010 / UK)
LOVE AND DECAY ft. INNESSA – Dancefloor Is My Playground (Kissthesound Records 2011 / UK)
KARMAHACKER – Back To Nature Wasted E.P. (Kissthesound Records 2012 / UK)
MOOT – Unununium E.P. (Kissthesound Records 2013 / UK)
DECAY AXIOMATIC – Amazonka Closed Her Eyes E.P. (Kissthesound Records 2012 / UK)
DECAY AXIOMATIC – Hydrogender E.P. (Kissthesound Records 2013 / UK)
DECAY AXIOMATIC – Sublime (Axiomatic Records 2013 / RU)

Compact cassettes
INNER VOICE – Morpheus (Pink Room 1992 / RU) | not official
INNER VOICE – Everything Made Of Plastic (Pink Room 1992 / RU) | not official
INNER VOICE – Trinitrotoluol (Pink Room 1993 / RU) | not official
IN’R’VOICE – In The Middle Of Nowhere (Pink Room 1995 / RU) | not official
IN’R’VOICE – Inner Vision (Pink Room 1996 / RU) | not official
IN’R’VOICE – Reason Nation (Pink Room 1997 / RU) | not official
IN’R’VOICE – Jai Jayati Jai (Compilation First Vision / Sun Trance 1998 / RU)

Vinyl records
IN’R’VOICE – Crying Universe (Fractal – Kin Records 1999 / UK)
IN’R’VOICE AND SHAKTA – Present Moment (Return To The Source 2000 / UK)
IN’R’VOICE – Space Boot (Psychoaneasis E.P. / Sphere Records 2001 / UK)

References External links Kissthesound Records website Axiomatic Records website Russian trance musicians Musicians from Moscow Year of birth missing (living people) Place of birth missing (living people) Living people Russian techno musicians
60222591
https://en.wikipedia.org/wiki/Android%2010
Android 10
Android 10 (codenamed Android Q during development) is the tenth major release and the 17th version of the Android mobile operating system. It was first released as a developer preview on March 13, 2019, and was released publicly on September 3, 2019, for supported Google Pixel devices as well as the third-party Essential Phone and Redmi K20 Pro in selected markets. The OnePlus 7T was the first device with Android 10 pre-installed. In October 2019, it was reported that Google's certification requirements for Google Mobile Services would only allow Android 10-based builds to be approved after January 31, 2020. 27.71% of Android devices run Android 10 (API 29), making it the second most used version of Android. History Google released the first beta of Android 10 under the preliminary name "Android Q" on March 13, 2019, exclusively on its Pixel phones, including the first-generation Pixel and Pixel XL, for which support was extended due to popular demand: having been guaranteed updates only up to October 2018, the first-generation Pixel and Pixel XL still received version updates to Android 10. The Pixel 2 and Pixel 2 XL were also included, having been granted an extended support period which guaranteed Android version updates for at least 3 years from when they were first available on the Google Store. A total of six beta or release-candidate versions were released before the final release. The beta program was expanded with the release of Beta 3 on May 7, 2019, which was made available on 14 partner devices from 11 OEMs, twice as many devices as Android Pie's beta. Beta access was removed from the Huawei Mate 20 Pro on May 21, 2019, due to U.S. government sanctions, but was restored on May 31. Google released Beta 4 on June 5, 2019, with the finalized Android Q APIs and SDK (API Level 29). Dynamic System Updates (DSU) were also included in Beta 4. DSU allows Android Q devices to temporarily install a Generic System Image (GSI) in order to try a newer version of Android on top of the current system; when users decide to end testing the chosen GSI, they can simply reboot the device and boot back into its normal Android version. Google released Beta 5 on July 10, 2019, with the final API 29 SDK as well as the latest optimizations and bug fixes. Google released Beta 6, the final release candidate for testing, on August 7, 2019. On August 22, 2019, it was announced that Android Q would be branded solely as "Android 10", with no codename. Google ended the practice of giving major releases titles based on desserts, arguing that this was not inclusive to international users (due either to the foods in question not being internationally known, or to their being difficult to pronounce in some languages). Android VP of engineering Dave Burke did reveal during a podcast that, in addition, most desserts beginning with the letter Q were exotic, and that he personally would have chosen queen cake. He also noted that there were references to "qt", an abbreviation of quince tart, within internal files and build systems relating to the release. The statue for the release is likewise the numeral 10, with the Android robot logo (which, as part of an accompanying rebranding, has also been changed to only consist of a head) resting inside the numeral "0".
Features Navigation Android 10 introduces a revamped full-screen gesture navigation system and new app open and close animations, with gestures such as swiping from either side edge of the display to go back, swiping up to go to the home screen, swiping up and holding to access Overview, swiping diagonally from a bottom corner of the screen to activate the Google Assistant, and swiping along the gesture bar at the bottom of the screen to switch apps. The use of an edge swiping gesture as a "Back" command was noted as potentially conflicting with apps that use sidebar menus and other functions accessible by swiping. Apps can use an API to opt specific areas of the screen out of back-gesture handling (see the code sketch below), and a sensitivity control was added for adjusting the size of the target area that activates the gesture; Google later stated that the drawer widget would support being "peeked" by long-pressing near the edge of the screen and then swiped open. The traditional three-key navigation system used since Android "Honeycomb" remains supported as an option, along with the two-button "pill" style navigation introduced in Android 9.0 Pie. Per Google certification requirements, OEMs are required to support Android 10's default gestures and three-key navigation. OEMs are free to add their own gestures alongside them; however, these must not be enabled by default, must be listed in a separate area one level deeper than other navigation settings, and cannot be promoted using notifications. The two-button gesture navigation system used on Android Pie is deprecated and may not be included on devices that ship with Android 10, though it can still be included as an option for continuity purposes on devices upgraded from Pie. User experience Android 10 includes a system-level dark mode. Third-party apps can automatically engage a dark theme when it is active. Apps can also present "settings panels" for specific settings (such as internet connection and Wi-Fi settings if an app requires internet) via overlay panels, so that the user does not have to leave the app in order to configure them. Privacy and security Several major security and privacy changes are present in Android 10: users can restrict apps to accessing location data only while they are actively being used in the foreground, and there are new restrictions on the launching of activities by background apps. For security (due to its use by clickjacking malware) and performance reasons, Android 10 Go Edition forbids use of overlays, except for apps that received the permission before a device was upgraded to Android 10. Encryption In February 2019, Google unveiled Adiantum, an encryption cipher designed primarily for use on devices that do not have hardware-accelerated support for the Advanced Encryption Standard (AES), such as low-end devices; Google stated that this cipher was five times faster than AES-256-XTS on an ARM Cortex-A7 CPU. Device encryption is therefore mandatory on all Android 10 devices, regardless of specifications, with Adiantum used where the CPU is not capable of hardware-accelerated AES. In addition, implementation of "file-based encryption" (first introduced in Android Nougat) is mandatory for all devices.
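The back-gesture opt-out mentioned in the Navigation section is exposed to apps through View#setSystemGestureExclusionRects, introduced in API level 29. The helper below is a minimal sketch; the class and method names are illustrative, not part of the platform:

```java
import android.graphics.Rect;
import android.view.View;
import java.util.Collections;

// Illustrative helper: keeps edge swipes over `view` from being taken as
// system Back gestures, so a drawer or seek bar continues to receive them.
public final class GestureExclusionHelper {
    private GestureExclusionHelper() {}

    public static void excludeFromBackGesture(View view) {
        view.addOnLayoutChangeListener(
                (v, left, top, right, bottom, oldL, oldT, oldR, oldB) -> {
            // Exclusion rects are in the view's own coordinates and must be
            // refreshed whenever its layout changes.
            Rect bounds = new Rect(0, 0, v.getWidth(), v.getHeight());
            // The system caps how much of each edge may be excluded (about
            // 200dp), so overly large requests are silently trimmed.
            v.setSystemGestureExclusionRects(Collections.singletonList(bounds));
        });
    }
}
```

The per-edge cap reflects the design trade-off described above: apps can protect small swipe-driven controls, but cannot disable the system Back gesture wholesale.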
On devices shipping with Android 10, security patches for selected system components (such as ANGLE, Conscrypt, media frameworks, networking components, and others) may be serviced via the Google Play Store, without requiring a complete system update ("Project Mainline"). In order to license Google Mobile Services, manufacturers must support these updates for specific modules, while the remainder are marked as "recommended" but optional. Selected modules within this system use the new APEX package format, a variation of APK files designed for housing and servicing low-level system components. Scoped storage A major change to storage access permissions known as "scoped storage" is supported on Android 10, and became mandatory for all apps beginning with Android 11. Apps are only allowed to access files in external storage that they created themselves (preferably contained within an app-specific directory), as well as audio, image, and video files contained within the Music, Pictures, or Videos directories. Any other file may only be accessed via user intervention, through the Storage Access Framework. Apps must hold a new "read privileged phone state" permission in order to read non-resettable device identifiers, such as the IMEI number. Transport Layer Security TLS 1.3 support is enabled by default. Platform Platform optimizations have been made for foldable smartphones, including app continuity when changing modes, changes to multi-window mode to allow all apps to run simultaneously (rather than only the actively used app running, with all others considered "paused"), and additional support for multiple displays. "Direct Share" has been succeeded by "sharing shortcuts". As before, it allows apps to return lists of direct targets for sharing (such as a combination of an app and a specific contact) for use within share menus. Unlike Direct Share, apps publish their targets in advance and do not have to be polled at runtime, improving performance. Native support has been added for MIDI controllers, the AV1 video codec, the Opus audio codec, and HDR10+. There is also a new standard API for retrieving depth information from camera photos, which can be used for more advanced effects. Native support for the aptX Adaptive, LHDC, LLAC, CELT and AAC LATM codecs was added as well. Android 10 supports the WPA3 encryption protocol and Enhanced Open, which introduce opportunistic encryption for Wi-Fi. Android 10 adds support for dual-SIM dual-standby (DSDS), though it was initially only available on the Pixel 3a and Pixel 3a XL. Android 10 Go Edition has performance improvements, with Google stating that apps would launch 10% quicker than on Pie. RISC-V support Android 10 has been ported to the RISC-V architecture by the Chinese chip designer T-Head Semiconductor, which got it running on a triple-core, 64-bit RISC-V CPU of its own design. See also Android version history References External links Android (operating system) 2019 software
29068112
https://en.wikipedia.org/wiki/PackageForge
PackageForge
PackageForge is a commercial graphical installation and packaging software tool for Symbian OS-based smartphones. PackageForge allows developers to graphically create software installation packages that can be installed on a Symbian OS-based phone; once a package is installed, the user can start using the application. PackageForge works by providing a graphical interface over the Symbian package definition files (.pkg). The developer provides information about the package, such as the vendor, package name, version, and the application files to include. After the package has been defined, it is compiled and built into a Symbian installation file (.sis), which is then ready to be uploaded to the Nokia Ovi Store or installed directly on a phone. SIS installation files are used for installing Flash Lite, Python for S60, Symbian C/C++ or Qt for Symbian applications. Most notable features
Wizards for creating different package types
Compatibility with makesis/signsis and the Carbide.c++ development environment
Localization support for multilingual packages
One-click build and sign of a package
Graphical management of software and device dependencies
User-friendly build log
See also .sis Symbian OS List of installation software References External links Official homepage Tools for creating Symbian installation packages Symbian software installation & packaging documentation Symbian OS
3104
https://en.wikipedia.org/wiki/Amiga%201000
Amiga 1000
The Commodore Amiga 1000, also known as the A1000 and originally marketed as the Amiga, is the first personal computer released by Commodore International in the Amiga line. It combines the 16/32-bit Motorola 68000 CPU, which was powerful by 1985 standards, with one of the most advanced graphics and sound systems in its class, and runs a preemptive multitasking operating system whose core fits into 256 KB of read-only memory; the machine shipped with 256 KB of RAM. The primary memory can be expanded internally with a manufacturer-supplied 256 KB module for a total of 512 KB of RAM, and using the external slot the primary memory can be expanded further, to a practical limit of about 9 MB. Design The A1000 has a number of characteristics that distinguish it from later Amiga models: it is the only model to feature the short-lived Amiga check-mark logo on its case; the majority of the case is elevated slightly to give a storage area for the keyboard when not in use (a "keyboard garage"); and the inside of the case is engraved with the signatures of the Amiga designers (similar to the Macintosh), including Jay Miner and the paw print of his dog Mitchy. The A1000's case was designed by Howard Stolz. As Senior Industrial Designer at Commodore, Stolz was the mechanical lead and primary interface with Sanyo in Japan, the contract manufacturer for the A1000 casing. The Amiga 1000 was manufactured in two variations: one uses the NTSC television standard and the other uses the PAL television standard. The NTSC variant was the initial model, manufactured and sold in North America. The later PAL model was manufactured in Germany and sold in countries using the PAL television standard. The first NTSC systems lack the EHB video mode, which is present in all later Amiga models. Because AmigaOS was rather buggy at the time of the A1000's release, the OS was not placed in ROM at that time. Instead, the A1000 includes a daughterboard with 256 KB of RAM, dubbed the "writable control store" (WCS), into which the core of the operating system is loaded from floppy disk (this portion of the operating system is known as the "Kickstart"). The WCS is write-protected after loading, and system resets do not require a reload of the WCS. In Europe, the WCS was often referred to as WOM (Write Once Memory), a play on the more conventional term "ROM" (read-only memory). Technical information The preproduction Amiga (codenamed "Velvet"), released to developers in early 1985, contained 128 KB of RAM with an option to expand it to 256 KB; Commodore later increased the standard system memory to 256 KB due to objections by the Amiga development team. The names of the custom chips were also different: Denise and Paula were called Daphne and Portia respectively. The casing of the preproduction Amiga was almost identical to the production version, the main difference being an embossed Commodore logo in the top left corner. It did not have the developer signatures. The Amiga 1000 has a Motorola 68000 CPU running at 7.15909 MHz (on NTSC systems) or 7.09379 MHz (PAL systems), precisely double the video color carrier frequency for NTSC or 1.6 times the color carrier frequency for PAL. The system clock timings are derived from the video frequency, which simplifies glue logic and allows the Amiga 1000 to make do with a single crystal. In keeping with its video game heritage, the chipset was designed to synchronize CPU memory access and chipset DMA so the hardware runs in real time without wait-state delays.
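For concreteness, the clock relationship can be checked against the standard colour subcarrier frequencies (3.579545 MHz for NTSC and 4.433619 MHz for PAL; these constants are general broadcast-standard knowledge, not figures taken from this article):

$$f_{\text{NTSC}} = 2 \times 3.579545\ \text{MHz} = 7.159090\ \text{MHz}, \qquad f_{\text{PAL}} = 1.6 \times 4.433619\ \text{MHz} = 7.093790\ \text{MHz}$$

Both results match the CPU clock rates quoted above, which is what allows a single video-derived crystal to drive the whole system.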
Though most units were sold with an analog RGB monitor, the A1000 also has a built-in composite video output, which allows the computer to be connected directly to some monitors other than the standard RGB monitor. The A1000 also has a "TV MOD" output, into which an RF modulator can be plugged, allowing connection to older TVs that lack even a composite video input. The original 68000 CPU can be directly replaced with a Motorola 68010, which can execute instructions slightly faster than the 68000 but also introduces a small degree of software incompatibility. Third-party CPU upgrades, which mostly fit in the CPU socket, use the faster 68020/68881 or 68030/68882 successor microprocessors and integrated memory. Such upgrades often have the option to revert to 68000 mode for full compatibility. Some boards have a socket to seat the original 68000, whereas the 68030 cards typically come with an on-board 68000. The original Amiga 1000 is the only model to have 256 KB of Amiga Chip RAM, which can be expanded to 512 KB with the addition of a daughterboard under a cover in the center front of the machine. RAM may also be upgraded via official and third-party upgrades, with a practical upper limit of about 9 MB of "fast RAM" due to the 68000's 24-bit address bus. This memory is accessible only by the CPU, permitting faster code execution, as DMA cycles are not shared with the chipset. The Amiga 1000 features an 86-pin expansion port (electrically identical to the later Amiga 500 expansion port, though the A500's connector is inverted). This port is used by third-party expansions such as memory upgrades and SCSI adapters. These resources are handled by the Amiga Autoconfig standard. Other expansion options are available, including a bus expander which provides two Zorro-II slots. Specifications Retail Introduced on July 23, 1985, during a star-studded gala featuring Andy Warhol and Debbie Harry held at the Vivian Beaumont Theater at Lincoln Center in New York City, machines began shipping in September with a base configuration of 256 KB of RAM at the retail price of US$1,295. An analog RGB monitor was available for around US$300, bringing the price of a complete Amiga system to US$1,595. Before the release of the Amiga 500 and Amiga 2000 models in 1987, the A1000 was marketed as simply the Amiga, although the model number was there from the beginning, as the original box indicates. In the US, the A1000 was marketed as The Amiga from Commodore, with the Commodore logo omitted from the case. The Commodore branding was retained for the international versions. Additionally, the Amiga 1000 was sold exclusively in computer stores in the US, rather than in the department stores and toy stores through which the VIC-20 and Commodore 64 were retailed. These measures were an effort to avoid Commodore's "toy-store" computer image created during the Tramiel era. Along with the operating system, the machine came bundled with a version of AmigaBASIC developed by Microsoft and a speech synthesis library developed by Softvoice, Inc. Aftermarket upgrades Many A1000 owners remained attached to their machines long after newer models rendered the units technically obsolete, and the machine attracted numerous aftermarket upgrades. Many CPU upgrades that plugged into the Motorola 68000 socket functioned in the A1000.
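The "fast RAM" ceiling mentioned above follows directly from the CPU's address width; a two-line check (the split of the address map between chip RAM, ROM, chip registers and expansion space is simplified away here):

    # The 68000 has a 24-bit address bus, so everything - chip RAM, ROM,
    # custom-chip registers and expansion RAM - must fit in 16 MB.
    ADDRESS_SPACE_MB = 2 ** 24 // 2 ** 20
    print(ADDRESS_SPACE_MB)    # 16; expansion "fast RAM" gets only part
                               # of this, hence the ~9 MB practical limit.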
Additionally, a line of products called the Rejuvenator series allowed the use of newer chipsets in the A1000, and an Australian-designed replacement A1000 motherboard called The Phoenix utilized the same chipset as the A3000 and added an A2000-compatible video slot and on-board SCSI controller. Impact In 1994, as Commodore filed for bankruptcy, Byte magazine called the Amiga 1000 "the first multimedia computer... so far ahead of its time that almost nobody—including Commodore's marketing department—could fully articulate what it was all about". In 2006, PC World rated the Amiga 1000 as the 7th greatest PC of all time. In 2007, it was rated by the same magazine as the 37th best tech product of all time. Joe Pillow "Joe Pillow" was the name given on the ticket for the extra airline seat purchased to hold the first Amiga prototype while on the way to the January 1984 Consumer Electronics Show. The airlines required a name for the airline ticket, and Joe Pillow was born. The engineers (RJ Mical and Dale Luck) who flew with the Amiga prototype (codenamed Lorraine) drew a happy face on the front of the pillowcase and even added a tie. Joe Pillow extended his fifteen minutes of fame when the Amiga went to production. All fifty-three Amiga team members who worked on the project signed the Amiga case. This included Joe Pillow and Jay Miner's dog Mitchy, who each got to "sign" the case in their own unique way. See also Amiga models and variants Amiga Sidecar, for using MS-DOS with an Intel 8088 @ 4.77 MHz and 256 KB RAM References External links The Commodore Amiga A1000 at OLD-COMPUTERS.COM Who was Joe Pillow? Amiga computers Computer-related introductions in 1985
1740187
https://en.wikipedia.org/wiki/Go-Back-N%20ARQ
Go-Back-N ARQ
Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in which the sending process continues to send a number of frames specified by a window size even without receiving an acknowledgement (ACK) packet from the receiver. It is a special case of the general sliding window protocol with a transmit window size of N and a receive window size of 1. It can transmit N frames to the peer before requiring an ACK. The receiver process keeps track of the sequence number of the next frame it expects to receive. It will discard any frame that does not have the exact sequence number it expects (either a duplicate frame it already acknowledged, or an out-of-order frame it expects to receive later) and will send an ACK for the last correct in-order frame. Once the sender has sent all of the frames in its window, it will detect that all of the frames since the first lost frame are outstanding, and will go back to the sequence number of the last ACK it received from the receiver process, fill its window starting with that frame, and continue the process over again. Go-Back-N ARQ is a more efficient use of a connection than Stop-and-wait ARQ, since, rather than waiting for an acknowledgement for each packet, the connection is still being utilized as packets are being sent. In other words, during the time that would otherwise be spent waiting, more packets are being sent. However, this method also results in sending frames multiple times: if any frame was lost or damaged, or the ACK acknowledging it was lost or damaged, then that frame and all following frames in the send window (even if they were received without error) will be re-sent. To avoid this, Selective Repeat ARQ can be used. Pseudocode These examples assume an infinite number of sequence and request numbers. (Note: the window [Sb, Sm] is inclusive at both ends, so the sequence max is initialised to N − 1 to give a window of exactly N frames.)

N  := window size
Rn := request number
Sn := sequence number
Sb := sequence base
Sm := sequence max

function receiver is
    Rn := 0
    Do the following forever:
        if the packet received = Rn and the packet is error free then
            Accept the packet and send it to a higher layer
            Rn := Rn + 1
        else
            Refuse packet
        Send a Request for Rn

function sender is
    Sb := 0
    Sm := N − 1
    Repeat the following steps forever:
        if you receive a request number where Rn > Sb then
            Sm := (Sm − Sb) + Rn
            Sb := Rn
        if no packet is in transmission then
            Transmit a packet where Sb ≤ Sn ≤ Sm.
            Packets are transmitted in order.

Choosing a window size (N) There are a few things to keep in mind when choosing a value for N:
1. The sender must not transmit too fast. N should be bounded by the receiver's ability to process packets.
2. N must be smaller than the number of sequence numbers (if they are numbered from zero to N) so that the receiver can distinguish new frames from retransmissions when any packet (any data or ACK packet) is dropped.
3. Given the bounds presented in (1) and (2), choose N to be the largest number possible.
References External links Go-Back-N ARQ demonstration in a Java applet Logical link control Error detection and correction
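As a concrete companion to the pseudocode above, here is a compact, illustrative Python simulation of Go-Back-N over a lossy link; the window size, frame count, loss model and output format are invented for this sketch (and ACKs are assumed reliable for brevity), so it is not a definitive implementation of the protocol.

    import random

    N = 4            # window size
    TOTAL = 10       # frames to deliver
    LOSS_RATE = 0.2  # probability that a frame is lost in transit

    def receive(rn, frame):
        """Go-Back-N receiver: accept only the exact in-order frame."""
        if frame == rn:
            return rn + 1   # deliver it and expect the next frame
        return rn           # duplicate or out-of-order: discard, re-ACK

    base = 0  # sequence base: oldest unacknowledged frame
    rn = 0    # receiver's next expected sequence number
    while base < TOTAL:
        # Send every frame the window currently allows.
        for sn in range(base, min(base + N, TOTAL)):
            if random.random() < LOSS_RATE:
                print("frame", sn, "lost in transit")
            else:
                rn = receive(rn, sn)
        # Cumulative ACK: the receiver requests rn, confirming frames < rn.
        print("ACK received for frames up to", rn - 1)
        base = rn  # go back: resend starting from the first unACKed frame

Note how a single lost frame causes every later frame in that window to be discarded and resent, which is exactly the inefficiency that Selective Repeat ARQ addresses.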
9792435
https://en.wikipedia.org/wiki/Vector-based%20graphical%20user%20interface
Vector-based graphical user interface
A vector-based graphical user interface is a mostly conceptual type of graphical user interface in which elements are drawn using vector rather than raster information. Pros and cons The benefits of a completely vector-based graphical user interface would include more efficient, resolution-independent scalability: the display scale (measured in dots per inch, or DPI) could be set higher or lower than a 1:1 pixel mapping without causing pixelation, enabling better use of high-resolution monitors. Cons might include: difficulty integrating raster-based applications (with some effort, this could be accomplished by texturing the entire raster-based application onto a vector-based plane, though the disadvantages of raster-based graphics would still stand); and slower rendering with greater system requirements, because today's monitors display only raster-based information, so the vector information would have to be rasterized (and optionally anti-aliased) before appearing on screen. Usage in 3D graphical user interfaces Since current 3D graphics are usually vector-based rather than raster-based, vector-based graphical user interfaces would be suitable for 3D graphical user interfaces; raster-based 3D models take up an enormous amount of memory, as they are stored and displayed using voxels. Operating systems such as Windows Vista, Mac OS X, and UNIX-based operating systems (including Linux) have benefited from using 3D graphical user interfaces. In Windows Vista, for example, Flip3D textures each window onto a 3D plane based on vector graphics. Even though the window itself is still raster-based, the plane onto which it is textured is vector-based; as a result, the windows appear flat when rotated. In Linux desktops, Compiz Fusion can texture each raster-based workspace onto a 3D vector-based cube. As operating systems evolve, eventually the entire window would be made from 3D vector graphics, so that it no longer appears "flat" when rotated; advanced lighting could also make 3D graphical user interfaces more aesthetically pleasing. Usage in 2D graphical user interfaces As computer monitors reach higher and higher resolutions, everything displayed at a fixed pixel size appears smaller; if the screen resolution were turned down instead, everything would appear pixelated. Resolution independence is being designed to solve this problem. With raster graphics, all icons need to be supplied at very high resolution so as not to appear pixelated on higher-resolution screens, which can take up large amounts of memory and hard disk space. If vector graphics were used instead, icons could be scaled to any size without losing data or appearing pixelated. Some graphical user interfaces, such as that of IRIX, use vector-based icons, and a number of vector-based icon sets are also available for window managers such as GNOME and KDE. With Windows, applications built using Windows Presentation Foundation (which is native to Windows Vista, but can be downloaded for Windows XP and Server 2003) are vector-based and scale losslessly according to Windows DPI settings. Even without this, however, it has always been possible to build applications to be DPI-aware. Additionally, in Vista, the Desktop Window Manager detects when an application is not DPI-aware and, if the computer is set to a different DPI than normal, uses bitmap scaling to render the window at a larger size.
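A toy sketch of the difference described above: geometry stored as control points can be re-evaluated exactly at any scale, while a raster image must be resampled (the shapes, the scale factor, and the nearest-neighbour resampler here are arbitrary choices for illustration).

    # Vector data: redraw at any scale without loss.
    def scale_vector(points, factor):
        # Re-evaluate the geometry at the new size; nothing is thrown away.
        return [(x * factor, y * factor) for (x, y) in points]

    icon_outline = [(0, 0), (10, 0), (10, 10), (0, 10)]  # a 10x10 square
    print(scale_vector(icon_outline, 3.5))  # exact 35x35 outline

    # Raster data: enlarging duplicates pixels (nearest neighbour), which
    # is what produces the blocky pixelation the article refers to.
    def scale_raster(rows, factor):
        return [[row[int(x / factor)] for x in range(int(len(row) * factor))]
                for row in [rows[int(y / factor)]
                            for y in range(int(len(rows) * factor))]]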
In 2008, version 4.1 of AmigaOS enhanced its Workbench with a 2D vector-based graphical interface built on the Cairo libraries, pragmatically integrated with a 3D compositing engine based on Porter-Duff routines. See also Sun Microsystems' NeWS (Network extensible Window System) DPI Resolution independence References Computer graphics
18950885
https://en.wikipedia.org/wiki/BBC%20Micro
BBC Micro
The British Broadcasting Corporation Microcomputer System, or BBC Micro, is a series of microcomputers and associated peripherals designed and built by Acorn Computers in the 1980s for the BBC Computer Literacy Project. Designed with an emphasis on education, it was notable for its ruggedness, expandability, and the quality of its operating system. An accompanying 1982 television series, The Computer Programme, featuring Chris Serle learning to use the machine, was broadcast on BBC2. After the Literacy Project's call for bids for a computer to accompany the TV programmes and literature, Acorn won the contract with the Proton, a successor of its Atom computer prototyped at short notice. Renamed the BBC Micro, the system was adopted by most schools in the United Kingdom, changing Acorn's fortunes. It was also successful as a home computer in the UK, despite its high cost. Acorn later employed the machine to simulate and develop the ARM architecture. While nine models were eventually produced with the BBC brand, the phrase "BBC Micro" is usually used colloquially to refer to the first six (Model A, B, B+64, B+128, Master 128, and Master Compact); subsequent BBC models are considered part of Acorn's Archimedes series. History During the early 1980s, the BBC started what became known as the BBC Computer Literacy Project. The project was initiated partly in response to an ITV documentary series The Mighty Micro, in which Christopher Evans of the UK's National Physical Laboratory predicted the coming microcomputer revolution and its effect on the economy, industry, and lifestyle of the United Kingdom. The BBC wanted to base its project on a microcomputer capable of performing various tasks which they could then demonstrate in the TV series The Computer Programme. The list of topics included programming, graphics, sound and music, teletext, controlling external hardware, and artificial intelligence. It developed an ambitious specification for a BBC computer, and discussed the project with several companies including Acorn Computers, Sinclair Research, Newbury Laboratories, Tangerine Computer Systems, and Dragon Data. The introduction of a specific microcomputer to a more general computer literacy initiative was a topic of controversy, however, with criticism aimed at the BBC for promoting a specific commercial product and for going beyond the "traditional BBC pattern" of promoting existing information networks of training and education providers. Accusations were even levelled at the Department of Industry for making the BBC "an arm of Government industrial policy" and using the Computer Literacy Project as a way of "funding industry through the back door", obscuring public financial support on behalf of a government that was ostensibly opposed to subsidising industry. The involvement of the BBC in microcomputing also initiated tentative plans by the independent television companies of the ITV network to introduce their own initiative and rival computing system, with a CP/M-based system proposed by Transam Computers under consideration for such an initiative by the Independent Television Companies Association at a late 1983 meeting. The proposed machine would have been priced at £399, matching that of the BBC Model B, and was reported as offering 64 KB of RAM, a disc interface, and serial and parallel interfaces, itself being a "low-cost development" of an existing machine, the Transam Tuscan, which included dual floppy drives and cost £1,700. 
This proposal was voted down by the ITV companies, citing a possible contravention of the companies' obligations under broadcasting regulations prohibiting sponsorship, along with concerns about a conflict of interest with advertisers of computer products. Despite denials of involvement with ITV from Prism Microproducts, the company had already been pursuing a joint venture with Transam on a product rumoured to be under consideration by the broadcasting group. This product, a business system subsequently known as the Wren, had reportedly been positioned as such an "ITV Micro" towards the end of 1983, also to be offered in a home variant with ORACLE teletext reception capabilities. However, not all ITV franchise holders were equally enthusiastic about scheduling programmes related to microcomputing or about pursuing a computer retailing strategy. The Acorn team had already been working on a successor to their existing Atom microcomputer. Known as the Proton, it included better graphics and a faster 2 MHz MOS Technology 6502 central processing unit. The machine was only at the design stage at the time, and the Acorn team, including Steve Furber and Sophie Wilson, had one week to build a working prototype from the sketched designs. The team worked through the night to get a working Proton together to show the BBC. Although the BBC expected a computer with the Zilog Z80 CPU and CP/M operating system, not the Proton's 6502 CPU and proprietary operating system, the Proton was the only machine to match the BBC's specification; it also exceeded the specification in nearly every parameter. Based on the Proton prototype, the BBC signed a contract with Acorn as early as February 1981; by June the BBC Micro's specifications and pricing were decided. As a concession to the BBC's expectation of "industry standard" compatibility with CP/M, apparently under the direction of John Coll, the Tube interface was incorporated into the design, enabling a Z80 second processor to be added. A new contract between Acorn and BBC Enterprises was agreed in 1984 for another four-year term, with other manufacturers having tendered for the deal. An Acorn representative admitted that the BBC Model B would not be competitive throughout the term of the renewed contract and that a successor would emerge. The OS ROM v1.0 contains the following credits: (C) 1981 Acorn Computers Ltd.Thanks are due to the following contributors to the development of the BBC Computer (among others too numerous to mention):- David Allen,Bob Austin,Ram Banerjee,Paul Bond,Allen Boothroyd,Cambridge,Cleartone,John Coll,John Cox,Andy Cripps,Chris Curry,6502 designers,Jeremy Dion,Tim Dobson,Joe Dunn,Paul Farrell,Ferranti,Steve Furber,Jon Gibbons,Andrew Gordon,Lawrence Hardwick,Dylan Harris,Hermann Hauser,Hitachi,Andy Hopper,ICL,Martin Jackson,Brian Jones,Chris Jordan,David King,David Kitson,Paul Kriwaczek,Computer Laboratory,Peter Miller,Arthur Norman,Glyn Phillips,Mike Prees,John Radcliffe,Wilberforce Road,Peter Robinson,Richard Russell,Kim Spence-Jones,Graham Tebby,Jon Thackray,Chris Turner,Adrian Warner,Roger Wilson,Alan Wright. Additionally, the last bytes of the BASIC ROM (v2 and v4) include the word "Roger", which is a reference to Sophie Wilson, known at the time as Roger. Market impact The machine was released as the BBC Microcomputer on 1 December 1981, although production problems pushed delivery of the majority of the initial run into 1982.
Nicknamed "the Beeb", it was popular in the UK, especially in the educational market; about 80% of British schools had a BBC microcomputer. BYTE called the BBC Micro Model B "a no-compromise computer that has many uses beyond self-instruction in computer technology". It called the Tube interface "the most innovative feature" of the computer, and concluded that "although some other British microcomputers offer more features for a given price, none of them surpass the BBC ... in terms of versatility and expansion capability". As with Sinclair Research's ZX Spectrum and Commodore International's Commodore 64, both released the following year, in 1982, demand greatly exceeded supply. For some months, there were long delays before customers received the machines they had ordered. Efforts were made to market the machine in the United States and West Germany. By October 1983, the US operation reported that American schools had placed orders with it totalling . In one deployment in Lowell, Massachusetts valued at $177,000, 138 BBC Micros were installed in eight of the 27 schools in the city, with the computer's networking capabilities, educational credentials, and the availability of software with "high education quality" accompanied by "useful lesson plans and workbooks" all given as reasons for selecting Acorn's machine in preference to the competition from IBM, Apple and Commodore. In October 1984, while preparing a major expansion of its US dealer network, Acorn claimed sales of 85 per cent of the computers in British schools, and delivery of 40,000 machines per month. That December, Acorn stated its intention to become the market leader in US educational computing. The New York Times considered the inclusion of local area networking to be of prime importance to teachers. The operation resulted in advertisements by at least one dealer in Interface Age magazine, but ultimately the attempt failed. The success of the machine in the UK was due largely to its acceptance as an "educational" computer – UK schools used BBC Micros to teach computer literacy, information technology skills. Acorn became more known for its computer than for its other products. Some Commonwealth countries, including India, started their own computer literacy programs around 1987 and used the BBC Micro, a clone of which was produced by Semiconductor Complex Limited and named the SCL Unicorn. Another Indian computer manufacturer, Hope Computers Pvt Ltd, made a BBC Micro clone called the Dolphin. Unlike the original BBC Micro, the Dolphin featured blue function keys. Production agreements were made with both SCL in India and distributor Harry Mazal in Mexico for the assembly of BBC Micro units from kits of parts, leading to full-scale manufacturing, with SCL also planning to fabricate the 6502 CPU under licence from Rockwell. According to reporting from early 1985, "several thousand Beebs a month" were being produced in India. Meanwhile, the eventual production arrangement in Mexico involved local manufacturer Datum, aiming to assemble 2000 units per month by May 1985, with the initial assembly intended to lead to the manufacture of all aspects of the machines apart from Acorn's proprietary ULA components. Such machines were intended for the Mexican and South American markets, potentially also appealing to those south-western states of the US having large Spanish-speaking populations. 
Ultimately, upon Acorn's withdrawal from the US in 1986, Datum would continue manufacturing at a level of 7000 to 8000 Spanish-language machines per year for the North and South American markets. The initial strategy for the BBC's computer literacy endeavour involved the marketing of the "Acorn Proton-based BBC microcomputer for less than £200". The Model A and the Model B were initially priced at £235 and £335 respectively, but increased almost immediately to £299 and £399 due to higher costs. The price of nearly £400 was roughly £1200 (€1393) in 2011 prices. Acorn anticipated total sales of around 12,000 units, but eventually more than 1.5 million BBC Micros were sold. The cost of the BBC models was high compared to competitors such as the ZX Spectrum and the Commodore 64, and from 1983 on Acorn attempted to counter this by producing a simplified but largely compatible version intended for home use, complementing the use of the BBC Micro in schools: the 32K Acorn Electron. Description Hardware features: Models A and B A key feature of the BBC Micro's design is the high-performance RAM it is equipped with. A common design technique in 6502 computers of the era was to run the RAM at twice the clock rate of the CPU. This allows a separate video display controller to access memory while the CPU is busy processing the data it has just read. In this way, the CPU and graphics hardware can share access to RAM through careful timing. This technique is used, for example, on the Apple II and the early Commodore models. The BBC machine, however, was designed to run at a faster CPU speed, 2 MHz, double that of these earlier machines. In this case, bus contention is normally an issue, as there is not enough time for the CPU to access the memory during the period when the video hardware is idle. Some machines of the era accept the inherent performance hit, as is the case for the Amstrad CPC, the Atari 8-bit family, and to a lesser extent the ZX Spectrum. Others, like the MSX systems, use entirely separate pools of memory for the CPU and video, slowing access between the two. Furber believed that the Acorn design should have a flat memory model and allow the CPU and video system to access the bus without interfering with each other. To do so, the RAM has to allow four million access cycles per second. Hitachi was the only company developing a DRAM that could run at that speed, the HM4816. To equip the prototype machine, the only four 4816s in the country were hand-carried to Acorn by the Hitachi representative. The National Semiconductor 81LS95 multiplexer is needed for the high memory speed. Furber recalled that competitors came to Acorn offering to replace the component with their own, but "none of them worked. And we never knew why. Which of course means we didn't know why the National Semiconductor one did work correctly. And a million and a half BBC Micros later it was still working and I still didn't know why". Another mystery was the 6502's data bus. The prototype BBC Micro exceeded the CPU's specifications, causing it to fail. The designers found that putting a finger on a certain place on the motherboard caused the prototype to work. Acorn put a resistor pack across the data bus, which Furber described as "the engineer's finger": "again, we have no idea why it's necessary, and a million and a half machines later it's still working, so nobody asked any questions". The Model A shipped with 16 KB of user RAM, while the Model B had 32 KB.
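The four-million-cycles figure described above is simply the sum of the two bus masters' demands; spelled out with idealised timings:

    # CPU and video controller interleave memory accesses, so the DRAM
    # must service both at full speed on a 2 MHz machine.
    CPU_ACCESSES = 2_000_000    # 2 MHz 6502, one memory access per cycle
    VIDEO_ACCESSES = 2_000_000  # video fetches interleaved at the same rate

    total = CPU_ACCESSES + VIDEO_ACCESSES
    print(total)                         # 4,000,000 accesses per second
    print(1e9 / total, "ns per access")  # 250 ns per access slot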
Extra ROMs can be fitted (four on the PCB or sixteen with expansion hardware) and accessed via paged memory. The machine includes three video ports: one with an RF modulator sending out a signal in the UHF band, another sending composite video suitable for connection to computer monitors, and a separate RGB video port. The separate RGB video out socket was an engineering requirement from the BBC to allow the machine to directly output a broadcast-quality signal for use within television programming; it is used on episodes of The Computer Programme and Making the Most of the Micro. The computer includes several input/output interfaces: serial and parallel printer ports, an 8-bit general-purpose digital I/O port, a port offering four analogue inputs, a light pen input, and an expansion connector (the "1 MHz bus") that enables other hardware to be connected. An Econet network interface and a disk drive interface were available as options. All motherboards have space for the necessary electronic components, but Econet was rarely installed. Additionally, an Acorn proprietary interface named the "Tube" allows a second processor to be added. Three models of second processor were offered by Acorn, based on the 6502, Z80 and 32016 CPUs. The Tube is used for third-party add-ons, including a Z80 board and hard disk drive from Torch that allows the BBC machine to run CP/M programs. Separate memory pages, each with a codename (FRED, JIM and SHEILA), are used to control access to the I/O. The Tube interface allowed Acorn to use BBC Micros with ARM CPUs as software development machines when creating the Acorn Archimedes. This resulted in the ARM development kit for the BBC Micro in 1986, priced at around £4000. From 2006, a kit with an ARM7TDMI CPU running at 64 MHz, with as much as 64 MB of RAM, was released for the BBC Micro and Master, using the Tube interface to upgrade the 8-bit micros into 32-bit RISC machines. Among the software that ran on the Tube were an enhanced version of the Elite video game and a computer-aided design system that requires a second 6502 CPU and a 3-dimensional joystick named a "Bitstik". The Model A and the Model B are built on the same printed circuit board (PCB), and a Model A can be upgraded to a Model B. Users wishing to run Model B software need to add the extra RAM and the user/printer MOS Technology 6522 VIA (which many games use for timers) and snip a link, a task that can be achieved without soldering. A full upgrade with all the external ports requires soldering the connectors to the motherboard. The original machines shipped with "OS 0.1", with later updates advertised in magazines, supplied as a clip-in integrated circuit; the last official version was "OS 1.2". Variations in the Acorn OS exist as a result of home-made projects, and modified machines can still be bought on internet auction sites such as eBay as of 2011. The BBC Model A was phased out of production with the introduction of the Acorn Electron, with chairman Chris Curry stating at the time that Acorn "would no longer promote it" (the Model A). Early BBC Micros use linear power supplies at the insistence of the BBC, which, as a broadcaster, was cautious about electromagnetic interference. The supplies were unreliable, and after a few months the BBC allowed switched-mode units. An apparent oversight in the manufacturing process resulted in many Model Bs producing a constant buzzing noise from the built-in speaker. This fault can be partly rectified by soldering a resistor across two pads.
There are five developments of the main BBC Micro circuit board that addressed various issues through the model's production, from 'Issue 1' through to 'Issue 7', with variants 5 and 6 not being released. The 1985 'BBC Microcomputer Service Manual' from Acorn documents the details of the technical changes, and Watford Electronics also commented on the board revisions in the manual for their 32K RAM board. Export models Two export models were developed: one for the US, with Econet and speech hardware as standard; the other for West Germany. The computer was unsuitable for the Australian market because, Furber said, the design failed above a certain ambient temperature. Export models are fitted with radio-frequency shielding as required by the respective countries. From June 1983 the name was always spelled out completely – "British Broadcasting Corporation Microcomputer System" – to avoid confusion with Brown, Boveri & Cie in international markets, after warnings from the Swiss multinational not to market the computer with the BBC label in West Germany, thus forcing Acorn to relabel "hundreds of machines" to comply with these demands. US models include the BASIC III ROM chip, modified to accept the American spelling of COLOR, but the height of the graphics display was reduced to 200 scan lines to suit NTSC TVs, severely affecting applications written for British computers. After the failed US marketing campaign, the unwanted machines were remanufactured for the British market and sold, resulting in a third 'UK export' variant. Side product In October 1984, the Acorn Business Computer (ABC)/Acorn Cambridge Workstation range of machines was announced, based primarily on BBC hardware. Hardware features B+64 and B+128 In mid-1985, Acorn introduced the Model B+, which increased the total RAM to 64 KB. This had a modest market impact and received a rather unsympathetic reception, with one reviewer's assessment being that the machine was "18 months too late" and that it "must be seen as a stop gap", and others criticising the elevated price of £500 (compared to the £400 of the original Model B) in the face of significantly cheaper competition providing as much or even twice as much memory. The extra RAM in the Model B+ is assigned as two blocks: a block of 20 KB dedicated solely to screen display (so-called shadow RAM) and a block of 12 KB of special sideways RAM. The B+128, introduced towards the end of 1985, comes with an additional 64 KB (4 × 16 KB sideways RAM banks), giving a total RAM of 128 KB. The B+ is incapable of running some original BBC B programs and games, such as the very popular Castle Quest. A particular problem is the replacement of the Intel 8271 floppy-disk controller with the Western Digital 1770: not only was the new controller mapped to different addresses, it was fundamentally incompatible, and the 8271 emulators that existed were necessarily imperfect for all but basic operation. Software that uses copy-protection techniques involving direct access to the controller does not operate on the new system. Acorn attempted to alleviate this, starting with version 2.20 of the 1770 DFS, via an 8271-backward-compatible Ctrl+Z+Break option. A long-running problem late in the B/B+'s commercial life, infamous amongst B+ owners, arose when Superior Software released Repton Infinity, which did not run on the B+. A series of unsuccessful replacements were issued before a version compatible with both was finally released.
BBC Master During 1986, Acorn followed up with the BBC Master, which offers memory sizes from 128 KB upwards and many other refinements that improve on the 1981 original. It has essentially the same 6502-based BBC architecture, with many of the upgrades that the original design intentionally made possible (extra ROM software, extra paged RAM, second processors) now included on the circuit board as internal plug-in modules. Software and expandability The BBC Micro platform amassed a large software base of both games and educational programs for its two main uses as a home and educational computer. Notable examples of each include the original release of Elite and Granny's Garden. Programming languages and some applications were supplied on ROM chips to be installed on the motherboard. These load instantly and leave the RAM free for programs or documents. Although appropriate content was little supported by television broadcasters, telesoftware could be downloaded via the optional Teletext Adapter and the third-party teletext adaptors that emerged. The built-in operating system, Acorn MOS, provides an extensive API to interface with all standard peripherals, ROM-based software, and the screen. Features that on other machines were specific to certain versions of BASIC, like vector graphics, keyboard macros, cursor-based editing, sound queues, and envelopes, are implemented in the MOS ROM and made available to any application. BBC BASIC itself, being in a separate ROM, can be replaced with another language. BASIC, other languages, and utility ROM chips reside in any of four 16 KB paged ROM sockets, with OS support for sixteen sockets via expansion hardware. The five sockets in total are located, partially obscured, under the keyboard, with the leftmost socket hard-wired for the OS. The intended purpose of the perforated panel on the left of the keyboard was for a Serial ROM or Speech ROM. The paged ROM system is essentially modular. A language-independent system of star commands, prefixed with an asterisk, provides the ability to select a language (for example *BASIC, *PASCAL), a filing system (*TAPE, *DISC), change settings (*FX, *OPT), or carry out ROM-supplied tasks (*COPY, *BACKUP) from the command line. The MOS recognises certain built-in commands, and otherwise polls the paged ROMs in descending order for service; if none of them claims the command, then the OS returns a Bad command error (a sketch of this dispatch order follows below). Suitable ROM images (or EPROM images) could be written to provide functions without requiring RAM for the code itself. Not all ROMs offer star commands (ROMs containing data files, for instance), but any ROM can "hook" into vectors to enhance the system's functionality. Often the ROM is a device driver for mass storage combined with a filing system, starting with Acorn's 1982 Disc Filing System, whose API became the de facto standard for floppy-disc access. The Acorn Graphics Extension ROM (GXR) expands the VDU routines to draw geometric shapes, flood fills, and sprites. During 1985, Micro Power designed and marketed a Basic Extension ROM, introducing statements such as WHILE, ENDWHILE, CASE, WHEN, OTHERWISE, and ENDCASE, as well as direct mode commands including VERIFY. Acorn strongly discouraged programmers from directly accessing the system variables and hardware, favouring official system calls. This was ostensibly to make sure programs kept working when migrated to coprocessors that utilise the Tube interface, but it also makes BBC Micro software more portable across the Acorn range.
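A minimal sketch of that star-command dispatch order, with invented class and ROM names (the real MOS works at the machine-code level through service calls; this only mirrors the order of service described above):

    class PagedROM:
        def __init__(self, socket, commands):
            self.socket = socket
            self.commands = commands       # star commands this ROM services
        def claims(self, command):
            return command in self.commands
        def service(self, command):
            return command + " handled by ROM in socket " + str(self.socket)

    def dispatch(command, builtin, roms):
        # Built-in MOS commands are tried first.
        if command in builtin:
            return builtin[command]()
        # Otherwise offer the command to each paged ROM, highest socket first.
        for rom in sorted(roms, key=lambda r: r.socket, reverse=True):
            if rom.claims(command):
                return rom.service(command)
        return "Bad command"

    roms = [PagedROM(1, {"*BASIC"}), PagedROM(3, {"*DISC", "*BACKUP"})]
    print(dispatch("*BACKUP", {}, roms))   # served by the socket-3 ROM
    print(dispatch("*NOPE", {}, roms))     # -> Bad command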
Whereas untrappable PEEKs and POKEs are used by other computers to reach the system elements, programs in either machine code or BBC BASIC instead pass parameters to an operating system routine. In this way the 6502 can translate the request for the local machine or send it across the Tube interface, as direct access is impossible from the coprocessor. Published programs largely conform to the API except for games, which routinely engage with the hardware for greater speed and therefore require a particular Acorn model. Many schools and universities employed the machines in Econet networks, making networked multiplayer games possible; few became popular, due to the limited number of machines aggregated in one place. A relatively late but well-documented example can be found in a dissertation based on a ringed RS-423 interconnect. Peripherals In line with its ethos of expandability, Acorn produced its own range of peripherals for the BBC Micro, including:
Joysticks
Tape recorder
Floppy drive interface upgrade
Floppy drives (single and double)
Econet networking upgrade
Econet Bridge
Winchester disk system
6502 Second Processor
Z80 Second Processor (with CP/M and business software suite)
32016 Second Processor
ARM Evaluation System
Teletext adapter
Prestel adapter
Speech synthesiser
Music 500 synthesiser
BBC Turtle (robot)
BBC Buggy
IEEE 488 interface
Various products from other manufacturers competed directly with Acorn's expansions. For example, companies such as Torch Computers and Cambridge Microprocessor Systems offered second-processor solutions for the BBC Micro. A large number of third-party suppliers also produced an abundance of add-on hardware, some of the most common being:
RGB monitors
Printers and plotters
Modems
BBC BASIC built-in programming language The built-in ROM-resident BBC BASIC programming language interpreter realised the system's educational emphasis and was key to its success; it is the most comprehensive of the contemporary BASIC implementations, and runs very efficiently. Advanced programs can be written without resorting to non-structured programming or machine code. Should one want or need to do some assembly programming, BBC BASIC has a built-in assembler that allows a mixture of BASIC and assembly code for whatever processor BASIC is running on. When the BBC Micro was released, many competing home computers used Microsoft BASIC, or variants typically designed to resemble it. Compared to Microsoft BASIC, BBC BASIC features IF...THEN...ELSE, REPEAT...UNTIL, and named procedures and functions, but retains GOTO and GOSUB for compatibility. It also supports high-resolution graphics, four-channel sound, pointer-based memory access (borrowed from BCPL), and rudimentary macro assembly. Long variable names are accepted and distinguished completely, not just by the first two characters. Other languages Acorn made a point of supporting not just BBC BASIC but also a number of contemporary languages, some of which were supplied as ROM chips to fit the spare sideways ROM sockets on the motherboard. Other languages were supplied on tape or disk.
Programming languages from Acornsoft included the following:
ISO Pascal (2 × 16 KB ROM + floppy disk)
S-Pascal (disk or tape)
BCPL (ROM, plus further optional disk-based modules)
Forth (16 KB ROM)
LISP (disk, tape or ROM)
Logo (2 × 16 KB ROM)
Turtle Graphics (disk or tape)
Micro-PROLOG (16 KB ROM)
COMAL (16 KB ROM)
As the Z80 Second Processor supported running CP/M, languages available for CP/M could also be used via that route. Successor machines Acorn produced its own 32-bit Reduced Instruction Set Computing (RISC) CPU during 1985, the ARM1. Furber composed a reference model of the processor on the BBC Micro in 808 lines of BASIC, and Arm Ltd. retains copies of the code for intellectual property purposes. The first prototype ARM platforms, the ARM Evaluation System and the A500 workstation, functioned as second processors attached to the BBC Micro's Tube interface. Acorn staff developed the A500's operating system in situ through the Tube until, one by one, the on-board I/O ports were enabled and the A500 ran as a stand-alone computer. With an upgraded processor, this was eventually released during 1987 as four models in the Archimedes series, the two lower-specified models (512 KB and 1 MB) continuing the BBC Microcomputer brand with the distinctive red function keys. Although the Archimedes ultimately was not a major success, the ARM family of processors has become the dominant processor architecture in mobile embedded consumer devices, particularly mobile telephones. Acorn's last BBC-related model, the BBC A3000, was released in 1989. It was essentially a 1 MB Archimedes in a one-piece case. Retro computing scene Furber said in 2015 that he was amazed that the BBC Micro "established this reputation for being reliable, because lots of it was finger-in-the-air engineering". As of 2018, thanks to its ready expandability and I/O functions, numbers of BBC Micros remain in use, and a retrocomputing community of dedicated users continues to find new tasks for the old hardware. They still survive in a few interactive displays in museums across the United Kingdom, and the Jodrell Bank observatory was reported to be using a BBC Micro to steer its 42 ft radio telescope in 2004. Furber said that although "the [engineering] margins on the Beeb were very, very small", when he asked BBC owners at a retrocomputing meeting what components had failed after 30 years, they said "you have to replace the capacitors in the power supply but everything else still works". The Archimedes came with 65Arthur, an emulator which BYTE stated "lets many programs for the BBC Micro run"; other emulators exist for many operating systems. In March 2008, the creators of the BBC Micro met at the Science Museum in London. There was to be an exhibition about the computer and its legacy during 2009. The UK National Museum of Computing at Bletchley Park uses BBC Micros as part of a scheme to educate school children about computer programming. In March 2012, the BBC and Acorn teams responsible for the BBC Micro and Computer Literacy Project met for a 30th-anniversary party, entitled "Beeb@30". This was held at Arm's offices in Cambridge and was co-hosted by the Centre for Computing History. Continued development and support Long after the "venerable old Beeb" was superseded, additional hardware and software has continued to be developed. Such developments have included Sprow's 1999 zip compression utility and a ROM Y2K bugfix for the BBC Master.
There are also a number of websites still supporting both hardware and software development for the BBC Micros and Acorn machines in general. Specifications (Model A to Model B+128) Display modes Like the IBM PC with the contemporary Color Graphics Adapter, the video output of the BBC Micro could be switched by software between a number of display modes. These varied from 20- and 40-column text suitable for a domestic TV to 80-column text best viewed with a high-quality RGB-connected monitor; the latter mode was often too blurred to view on a domestic TV via the UHF output. The variety of modes offered applications a flexible compromise between colour depth, resolution and memory economy. In the first models, the OS and applications were left with whatever RAM the display mode did not claim. Mode 7 was a Teletext mode, extremely economical on memory and an original requirement due to the BBC's own use of broadcast teletext (Ceefax); it also made the computer useful as a Prestel terminal. The teletext characters were generated by an SAA5050 chip, so the mode could be used with monitors and TV sets lacking a Teletext receiver. Mode 7 used only 1 KB of video RAM by storing each character as its ASCII code, rather than as the bitmap image needed in the other modes. Modes 0 to 6 could display colours from a logical palette of sixteen: the eight basic colours at the vertices of the RGB colour cube and eight flashing colours made by alternating the basic colour with its inverse. The palette could be freely reprogrammed without touching display memory. Modes 3 and 6 were special text-only modes that used less RAM by reducing the number of text rows and inserting blank scan lines below each row; Mode 6 was the smallest, allocating 8 KB as video memory. Modes 0 to 6 could show diacritics and other user-defined characters. All modes except Mode 7 supported bitmapped graphics, but graphics commands such as DRAW and PLOT had no effect in the text-only modes. The BBC B+ and the later Master provided 'shadow modes', where the 1–20 KB frame buffer was stored in an alternative RAM bank, freeing the main memory for user programs. This feature was requested by setting bit 7 of the mode variable, i.e. by requesting modes 128–135. Optional extras A speech synthesis upgrade based on the Texas Instruments TMS5220 featured sampled words spoken by BBC newscaster Kenneth Kendall. This speech system was standard on the US model, where it had an American vocabulary. The Computer Concepts Speech ROM also made use of the TMS5220 speech processor but not the speech ROMs, instead driving the speech processor directly. The speech upgrade sold poorly and was largely superseded by Superior Software's software-based synthesiser using the standard sound hardware. The speech upgrade also added two empty sockets next to the keyboard, intended for 16 KB serial ROM cartridges containing either extra speech phoneme data beyond that held in the speech paged ROM or general software accessed through the ROM Filing System. The original plan was that some games would be released on cartridges, but due to the limited sales of the speech upgrade combined with economic and other viability concerns, little or no software was ever produced for these sockets. The cut-out space next to the keyboard (nicknamed the "ashtray") was more commonly used to install other upgrades, such as a ZIF socket for conventional paged ROMs.
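The memory figures quoted above follow directly from each mode's geometry; a rough check using the well-known Mode 0, Mode 1 and Mode 7 parameters:

    def bitmap_bytes(width, height, bits_per_pixel):
        # A bitmapped mode must store every pixel.
        return width * height * bits_per_pixel // 8

    print(bitmap_bytes(640, 256, 1))  # Mode 0: 2 colours -> 20480 bytes (20 KB)
    print(bitmap_bytes(320, 256, 2))  # Mode 1: 4 colours -> 20480 bytes (20 KB)
    print(40 * 25)                    # Mode 7: 1 byte per character -> 1000 bytes (~1 KB)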
Use in the entertainment industry The BBC Domesday Project, a pioneering multimedia experiment, was based on a modified version of the BBC Micro's successor, the BBC Master. Musician Vince Clarke of the British synth-pop bands Depeche Mode, Yazoo, and Erasure used a BBC Micro (and later a BBC Master) with the UMI music sequencer to compose many hits. In music videos from the 1980s featuring Vince Clarke, a BBC Micro is often present or provides text and graphics, such as in the clip for Erasure's "Oh L'Amour". The musical group Queen used the UMI Music Sequencer on their record A Kind of Magic; the UMI is also mentioned in the CD booklet. Other bands who have used the Beeb for making music are A-ha and the reggae band Steel Pulse. Paul Ridout is credited as "UMI programmer" on the Cars' bassist/vocalist Benjamin Orr's 1986 solo album, The Lace. Other UMI users included Blancmange, Alan Parsons and Mutt Lange. Black Uhuru used the Envelope Generator from SYSTEM Software (Sheffield), running on a BBC Micro, to create some of the electro-dub sounds on Try It (Anthem album, 1983). The BBC Micro was used extensively to provide graphics and sound effects for many early 1980s BBC TV shows. These included, notably, series 3 and 4 of The Adventure Game; the children's quiz game First Class (where the onscreen scoreboard was provided by a BBC Micro nicknamed "Eugene"); and numerous 1980s episodes of Doctor Who, including "Castrovalva", "The Five Doctors", and "The Twin Dilemma". Legacy In 2013, NESTA released a report into the legacy of the BBC Micro, looking at the history and impact of the machine and the BBC Computer Literacy Project. In June 2018, the BBC released its archives of the Computer Literacy Project. The BBC Micro had a lasting technological impact on the education market by introducing an informal educational standard around the hardware and software technologies employed by the range, particularly the use of BBC BASIC, and by establishing a considerable investment by schools in software for the machine. Consequently, manufacturers of rival systems such as IBM PC compatibles (and almost-compatibles such as the RM Nimbus), the Apple Macintosh, and the Commodore Amiga, as well as Acorn as the manufacturer of the BBC Micro's successor, the Archimedes, were compelled to provide a degree of compatibility with the large number of machines already deployed in schools. See also Acorn Electron Acorn Archimedes BBC Computer Literacy Project 2012 BBC Master Raspberry Pi RiscPC Micro Bit – modern successor to the project TV Micro Men – BBC documentary drama Micro Live – BBC television programme Making the Most of the Micro – BBC television programme Magazines BEEBUG – user group magazine (BBC) Acorn User The Micro User (also known as Acorn Computing) NDR computer WDR computer References External links BeebWiki – BBC Micro Wiki Acorn and the BBC Micro: From education to obscurity (archived) The Acorn BBC Micro @ The Centre for Computing History BBC Micro connected to the Internet converting RSS headline feeds from the BBC News site into audio BBC Microcomputers Video of a BBC computer show from 1985 The BBC Microcomputer User Guide Products introduced in 1981 Acorn Computers Computers designed in the United Kingdom 6502-based home computers Home computers Micro Home video game consoles
338835
https://en.wikipedia.org/wiki/Reality%20hacker
Reality hacker
Reality hacker, reality cracker or reality coder may refer to: Reality hacking, any phenomenon that emerges from the nonviolent use of illegal or legally ambiguous digital tools in pursuit of politically, socially, or culturally subversive ends. Reality Hacker, a character class in the game Realmwalkers: Earth Light by Mind's Eye Publishing. Reality Hackers, an earlier name for Mondo 2000 magazine. Reality Coders, a faction of the Virtual Adepts, a secret society of mages whose magick revolves around digital technology, in the Mage: The Ascension role-playing game. See also Life hack Urban Exploration Hacktivism Pervasive game
19996
https://en.wikipedia.org/wiki/MIDI
MIDI
MIDI (an acronym for Musical Instrument Digital Interface) is a technical standard that describes a communications protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices for playing, editing, and recording music. The specification originates in a paper titled Universal Synthesizer Interface, published by Dave Smith and Chet Wood, then of Sequential Circuits, at the October 1981 Audio Engineering Society conference in New York City. A single MIDI link through a MIDI cable can carry up to sixteen channels of information, each of which can be routed to a separate device or instrument. This could be sixteen different digital instruments, for example. MIDI carries event messages: data that specify the instructions for music, including a note's notation, pitch and velocity (typically heard as the loudness or softness of a note); vibrato; panning to the right or left of the stereo field; and clock signals (which set the tempo). A byte-level sketch of such messages appears below. When a musician plays a MIDI instrument, all of the key presses, button presses, knob turns and slider changes are converted into MIDI data. One common MIDI application is to play a MIDI keyboard or other controller and use it to trigger a digital sound module (which contains synthesized musical sounds) to generate sounds, which the audience hears produced by a keyboard amplifier. MIDI data can be transferred via MIDI or USB cable, or recorded to a sequencer or digital audio workstation to be edited or played back. A file format that stores and exchanges the data is also defined. Advantages of MIDI include small file size, ease of modification and manipulation, and a wide choice of electronic instruments and synthesized or digitally sampled sounds. A MIDI recording of a performance on a keyboard could sound like a piano or other keyboard instrument; however, since MIDI records the note messages rather than the specific sounds, the recording could be changed to many other sounds, ranging from synthesized or sampled guitar or flute to full orchestra. A MIDI recording is not an audio signal, as with a sound recording made with a microphone. Prior to the development of MIDI, electronic musical instruments from different manufacturers could generally not communicate with each other. This meant that a musician could not, for example, plug a Roland keyboard into a Yamaha synthesizer module. With MIDI, any MIDI-compatible keyboard (or other controller device) can be connected to any other MIDI-compatible sequencer, sound module, drum machine, synthesizer, or computer, even if they are made by different manufacturers. MIDI technology was standardized in 1983 by a panel of music industry representatives, and is maintained by the MIDI Manufacturers Association (MMA). All official MIDI standards are jointly developed and published by the MMA in Los Angeles and the MIDI Committee of the Association of Musical Electronics Industry (AMEI) in Tokyo. In 2016, the MMA established The MIDI Association (TMA) to support a global community of people who work, play, or create with MIDI. History In the early 1980s, there was no standardized means of synchronizing electronic musical instruments manufactured by different companies. Manufacturers had their own proprietary standards to synchronize instruments, such as CV/gate, DIN sync and Digital Control Bus (DCB).
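As a byte-level sketch of the event messages described above (the helper names and channel choice are illustrative; the 0x90/0x80 status bytes for Note On/Note Off are from the MIDI 1.0 specification, with the channel in the low nibble and two 7-bit data bytes following):

    def note_on(channel, note, velocity):
        # Status byte: message type in the high nibble, channel in the low.
        assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
        return bytes([0x90 | channel, note, velocity])

    def note_off(channel, note):
        return bytes([0x80 | channel, note, 0])

    print(note_on(0, 60, 100).hex())   # '903c64': middle C on channel 1
    print(note_off(0, 60).hex())       # '803c00': release the same note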
Roland founder Ikutaro Kakehashi felt the lack of standardization was limiting the growth of the electronic music industry. In June 1981, he proposed developing a standard to Oberheim Electronics founder Tom Oberheim, who had developed his own proprietary interface, the Oberheim System. Kakehashi felt the Oberheim System was too cumbersome, and spoke to Sequential Circuits president Dave Smith about creating a simpler, cheaper alternative. While Smith discussed the concept with American companies, Kakehashi discussed it with Japanese companies Yamaha, Korg and Kawai. Representatives from all companies met to discuss the idea in October. Initially, only Sequential Circuits and the Japanese companies were interested. Using Roland's DCB as a basis, Smith and Sequential Circuits engineer Chet Wood devised a universal interface to allow communication between equipment from different manufacturers. Smith and Wood proposed this standard in a paper, Universal Synthesizer Interface, at the Audio Engineering Society show in October 1981. The standard was discussed and modified by representatives of Roland, Yamaha, Korg, Kawai, and Sequential Circuits. Kakehashi favored the name Universal Musical Interface (UMI), pronounced you-me, but Smith felt this was "a little corny". However, he liked the use of "instrument" instead of "synthesizer", and proposed the name Musical Instrument Digital Interface (MIDI). Moog Music founder Robert Moog announced MIDI in the October 1982 issue of Keyboard. At the 1983 Winter NAMM Show, Smith demonstrated a MIDI connection between Prophet 600 and Roland JP-6 synthesizers. The MIDI specification was published in August 1983. The MIDI standard was unveiled by Kakehashi and Smith, who received Technical Grammy Awards in 2013 for their work. In 1982, the first instruments were released with MIDI, the Roland Jupiter-6 and the Prophet 600. In 1983, the first MIDI drum machine, the Roland TR-909, and the first MIDI sequencer, the Roland MSQ-700, were released. The first computers to support MIDI, the NEC PC-88 and PC-98, did so in 1982. The MIDI Manufacturers Association (MMA) was formed following a meeting of "all interested companies" at the 1984 Summer NAMM Show in Chicago. The MIDI 1.0 Detailed Specification was published at the MMA's second meeting at the 1985 Summer NAMM show. The standard continued to evolve, adding standardized song files in 1991 (General MIDI) and adapting to new connection standards such as USB and FireWire. In 2016, the MIDI Association was formed to continue overseeing the standard. An initiative to create a 2.0 standard was announced in January 2019. The MIDI 2.0 standard was introduced at the 2020 Winter NAMM show. Impact MIDI's appeal was originally limited to professional musicians and record producers who wanted to use electronic instruments in the production of popular music. The standard allowed different instruments to communicate with each other and with computers, and this spurred a rapid expansion of the sales and production of electronic instruments and music software. This interoperability allowed one device to be controlled from another, which reduced the amount of hardware musicians needed. MIDI's introduction coincided with the dawn of the personal computer era and the introduction of samplers and digital synthesizers. The creative possibilities brought about by MIDI technology are credited with helping revive the music industry in the 1980s. MIDI introduced capabilities that transformed the way many musicians work.
MIDI sequencing makes it possible for a user with no notation skills to build complex arrangements. A musical act with as few as one or two members, each operating multiple MIDI-enabled devices, can deliver a performance similar to that of a larger group of musicians. The expense of hiring outside musicians for a project can be reduced or eliminated, and complex productions can be realized on a system as small as a synthesizer with integrated keyboard and sequencer. MIDI also helped establish home recording. By performing preproduction in a home environment, an artist can reduce recording costs by arriving at a recording studio with a partially completed song. Applications Instrument control MIDI was invented so that electronic or digital musical instruments could communicate with each other and so that one instrument could control another. For example, a MIDI-compatible sequencer can trigger beats produced by a drum sound module. Analog synthesizers that have no digital component and were built prior to MIDI's development can be retrofitted with kits that convert MIDI messages into analog control voltages. When a note is played on a MIDI instrument, it generates a digital MIDI message that can be used to trigger a note on another instrument. The capability for remote control allows full-sized instruments to be replaced with smaller sound modules, and allows musicians to combine instruments to achieve a fuller sound, or to create combinations of synthesized instrument sounds, such as acoustic piano and strings. MIDI also enables other instrument parameters (volume, effects, etc.) to be controlled remotely. Synthesizers and samplers contain various tools for shaping an electronic or digital sound. Filters adjust timbre, and envelopes automate the way a sound evolves over time after a note is triggered. A filter's cutoff frequency and the envelope attack (the time it takes for a sound to reach its maximum level) are examples of synthesizer parameters, and can be controlled remotely through MIDI. Effects devices have different parameters, such as delay feedback or reverb time. When a MIDI continuous controller number (CCN) is assigned to one of these parameters, the device responds to any messages it receives that are identified by that number. Controls such as knobs, switches, and pedals can be used to send these messages. A set of adjusted parameters can be saved to a device's internal memory as a patch, and these patches can be remotely selected by MIDI program change messages. Composition MIDI events can be sequenced with computer software, or in specialized hardware music workstations. Many digital audio workstations (DAWs) are specifically designed to work with MIDI as an integral component. MIDI piano rolls have been developed in many DAWs so that the recorded MIDI messages can be easily modified. These tools allow composers to audition and edit their work much more quickly and efficiently than older solutions, such as multitrack tape recording, allowed. Because MIDI is a set of commands that create sound, MIDI sequences can be manipulated in ways that prerecorded audio cannot. It is possible to change the key, instrumentation or tempo of a MIDI arrangement, and to reorder its individual sections. The ability to compose ideas and quickly hear them played back enables composers to experiment. Algorithmic composition programs provide computer-generated performances that can be used as song ideas or accompaniment.
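As an illustration of this command-based representation, the following Python sketch generates a short random phrase directly as raw note-on and note-off bytes. The status-byte layouts follow the published MIDI 1.0 specification; the scale, channel and velocity values are arbitrary choices for the example.

    import random

    def note_on(channel, note, velocity):
        return bytes([0x90 | channel, note, velocity])  # status 0x9n: note-on

    def note_off(channel, note):
        return bytes([0x80 | channel, note, 0])         # status 0x8n: note-off

    # An eight-note phrase drawn from a C major scale, on channel 1 (encoded 0).
    scale = [60, 62, 64, 65, 67, 69, 71, 72]            # MIDI note numbers
    phrase = []
    for _ in range(8):
        note = random.choice(scale)
        phrase.append(note_on(0, note, 96))
        phrase.append(note_off(0, note))

    print(b"".join(phrase).hex())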
Some composers may take advantage of the standard, portable set of commands and parameters in MIDI 1.0 and General MIDI (GM) to share musical data files among various electronic instruments. Data composed as sequenced MIDI recordings can be saved as a Standard MIDI File (SMF), digitally distributed, and reproduced by any computer or electronic instrument that also adheres to the same MIDI, GM, and SMF standards. MIDI data files are much smaller than corresponding recorded audio files. Use with computers The personal computer market stabilized at the same time that MIDI appeared, and computers became a viable option for music production; by 1983, computers had started to play a role in mainstream music production. In the years immediately after the 1983 ratification of the MIDI specification, MIDI features were adapted to several early computer platforms. NEC's PC-88 and PC-98 began supporting MIDI as early as 1982. The Yamaha CX5M introduced MIDI support and sequencing in an MSX system in 1984. The spread of MIDI on personal computers was largely facilitated by Roland Corporation's MPU-401, released in 1984 as the first MIDI-equipped PC sound card, capable of MIDI sound processing and sequencing. After Roland sold MPU sound chips to other sound card manufacturers, the MPU-401 established a universal standard MIDI-to-PC interface. The widespread adoption of MIDI led to computer-based MIDI software being developed. Soon after, a number of platforms began supporting MIDI, including the Apple II Plus, IIe and Macintosh, Commodore 64 and Amiga, Atari ST, Acorn Archimedes, and PC DOS. The Macintosh was a favorite among musicians in the United States, as it was marketed at a competitive price, and it took several years for PC systems to catch up with its efficiency and graphical interface. The Atari ST was preferred in Europe, where Macintoshes were more expensive. The Atari ST had the advantage of MIDI ports built directly into the computer. Most music software in MIDI's first decade was published for either the Apple or the Atari. By the time of Windows 3.0's 1990 release, PCs had caught up in processing power and had acquired a graphical interface, and software titles began to see release on multiple platforms. In 2015, Retro Innovations released the first MIDI interface for a Commodore VIC-20, making the computer's four voices available to electronic musicians and retro-computing enthusiasts for the first time. Retro Innovations also makes a MIDI interface cartridge for Tandy Color Computer and Dragon computers. Chiptune musicians also use retro gaming consoles to compose, produce and perform music using MIDI interfaces. Custom interfaces are available for the Famicom, Nintendo Entertainment System (NES), Nintendo Game Boy and Game Boy Advance, Sega Mega Drive and Sega Genesis. Computer files Standard files The Standard MIDI File (SMF) is a file format that provides a standardized way for music sequences to be saved, transported, and opened in other systems. The standard was developed and is maintained by the MMA, and SMFs usually use a .mid extension. The compact size of these files led to their widespread use in computers, mobile phone ringtones, webpage authoring and musical greeting cards. These files are intended for universal use and include such information as note values, timing and track names. Lyrics may be included as metadata, and can be displayed by karaoke machines. SMFs are created as an export format of software sequencers or hardware workstations.
They organize MIDI messages into one or more parallel tracks and time-stamp the events so that they can be played back in sequence. A header contains the arrangement's track count, tempo and an indicator of which of three SMF formats the file uses. A type 0 file contains the entire performance, merged onto a single track, while type 1 files may contain any number of tracks that are performed synchronously. Type 2 files are rarely used; they store multiple arrangements, with each arrangement having its own track, intended to be played in sequence. RMID files Microsoft Windows bundles SMFs together with Downloadable Sounds (DLS) in a Resource Interchange File Format (RIFF) wrapper, as RMID files with a .rmi extension. RIFF-RMID has been deprecated in favor of Extensible Music Files (XMF). A MIDI file is not an audio recording. Rather, it is a set of instructions (for example, for pitch or tempo) and can use a thousand times less disk space than the equivalent recorded audio. Due to their tiny file size, fan-made MIDI arrangements became an attractive way to share music online, before the advent of broadband internet access and multi-gigabyte hard drives. The major drawback to this is the wide variation in the quality of users' audio cards, and in the actual audio, contained as samples or synthesized sound on the card, to which the MIDI data only refers symbolically. Even a sound card that contains high-quality sampled sounds can have inconsistent quality from one sampled instrument to another. Early budget-priced cards, such as the AdLib and the Sound Blaster and its compatibles, used a stripped-down version of Yamaha's frequency modulation synthesis (FM synthesis) technology played back through low-quality digital-to-analog converters. The low-fidelity reproduction of these ubiquitous cards was often assumed to somehow be a property of MIDI itself. This created a perception of MIDI as low-quality audio, while in reality MIDI itself contains no sound, and the quality of its playback depends entirely on the quality of the sound-producing device. Software The main advantage of the personal computer in a MIDI system is that it can serve a number of different purposes, depending on the software that is loaded. Multitasking allows simultaneous operation of programs that may be able to share data with each other. Sequencers Sequencing software allows recorded MIDI data to be manipulated using standard computer editing features such as cut, copy and paste and drag and drop. Keyboard shortcuts can be used to streamline workflow, and, in some systems, editing functions may be invoked by MIDI events. The sequencer allows each channel to be set to play a different sound and gives a graphical overview of the arrangement. A variety of editing tools are made available, including a notation display or scorewriter that can be used to create printed parts for musicians. Tools such as looping, quantization, randomization, and transposition simplify the arranging process. Beat creation is simplified, and groove templates can be used to duplicate another track's rhythmic feel. Realistic expression can be added through the manipulation of real-time controllers. Mixing can be performed, and MIDI can be synchronized with recorded audio and video tracks. Work can be saved, and transported between different computers or studios.
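Quantization, for example, amounts to snapping each recorded event's time-stamp to the nearest point on a rhythmic grid; a minimal sketch, with illustrative tick values:

    def quantize(events, grid):
        """Snap event times (in ticks) to the nearest multiple of grid ticks."""
        return [(grid * round(ticks / grid), message) for ticks, message in events]

    # At 480 ticks per quarter note, an eighth-note grid is 240 ticks.
    recorded = [(0, "C4 on"), (233, "E4 on"), (491, "G4 on")]
    print(quantize(recorded, 240))  # [(0, 'C4 on'), (240, 'E4 on'), (480, 'G4 on')]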
Sequencers may take alternate forms, such as drum pattern editors that allow users to create beats by clicking on pattern grids, and loop sequencers such as ACID Pro, which allow MIDI to be combined with prerecorded audio loops whose tempos and keys are matched to each other. Cue-list sequencing is used to trigger dialogue, sound effect, and music cues in stage and broadcast production. Notation software With MIDI, notes played on a keyboard can automatically be transcribed to sheet music. Scorewriting software typically lacks advanced sequencing tools, and is optimized for the creation of a neat, professional printout designed for live instrumentalists. These programs provide support for dynamics and expression markings, chord and lyric display, and complex score styles. Software is available that can print scores in braille. Notation programs include Finale, Encore, Sibelius, MuseScore and Dorico. SmartScore software can produce MIDI files from scanned sheet music. Editor/librarians Patch editors allow users to program their equipment through the computer interface. These became essential with the appearance of complex synthesizers such as the Yamaha FS1R, which contained several thousand programmable parameters but had an interface that consisted of fifteen tiny buttons, four knobs and a small LCD. Digital instruments typically discourage users from experimentation, due to their lack of the feedback and direct control that switches and knobs would provide, but patch editors give owners of hardware instruments and effects devices the same editing functionality that is available to users of software synthesizers. Some editors are designed for a specific instrument or effects device, while other, universal editors support a variety of equipment, and ideally can control the parameters of every device in a setup through the use of System Exclusive messages. Patch librarians have the specialized functions of organizing the sounds in a collection of equipment and of exchanging entire banks of sounds between an instrument and a computer. In this way the device's limited patch storage is augmented by a computer's much greater disk capacity. Once transferred to the computer, it is possible to share custom patches with other owners of the same instrument. Universal editor/librarians that combine the two functions were once common, and included Opcode Systems' Galaxy and eMagic's SoundDiver. These programs have been largely abandoned with the trend toward computer-based synthesis, although Mark of the Unicorn's (MOTU) Unisyn and Sound Quest's Midi Quest remain available. Native Instruments' Kore was an effort to bring the editor/librarian concept into the age of software instruments. Auto-accompaniment programs Programs that can dynamically generate accompaniment tracks are called auto-accompaniment programs. These create a full band arrangement in a style that the user selects, and send the result to a MIDI sound-generating device for playback. The generated tracks can be used as educational or practice tools, as accompaniment for live performances, or as a songwriting aid. Synthesis and sampling Computers can use software to generate sounds, which are then passed through a digital-to-analog converter (DAC) to a power amplifier and loudspeaker system. The number of sounds that can be played simultaneously (the polyphony) is dependent on the power of the computer's CPU, as are the sample rate and bit depth of playback, which directly affect the quality of the sound.
Synthesizers implemented in software are subject to timing issues that are not necessarily present with hardware instruments, whose dedicated operating systems are not subject to interruption from background tasks as desktop operating systems are. These timing issues can cause synchronization problems, and clicks and pops when sample playback is interrupted. Software synthesizers also may exhibit additional latency in their sound generation. The roots of software synthesis go back as far as the 1950s, when Max Mathews of Bell Labs wrote the MUSIC-N programming language, which was capable of non-real-time sound generation. The first synthesizer to run directly on a host computer's CPU was Reality, by Dave Smith's Seer Systems, which achieved a low latency through tight driver integration, and therefore could run only on Creative Labs soundcards. Some systems use dedicated hardware to reduce the load on the host CPU, as with Symbolic Sound Corporation's Kyma System, and the Creamware/Sonic Core Pulsar/SCOPE systems, which power an entire recording studio's worth of instruments, effect units, and mixers. The ability to construct full MIDI arrangements entirely in computer software allows a composer to render a finalized result directly as an audio file. Game music Early PC games were distributed on floppy disks, and the small size of MIDI files made them a viable means of providing soundtracks. Games of the DOS and early Windows eras typically required compatibility with either Ad Lib or Sound Blaster audio cards. These cards used FM synthesis, which generates sound through modulation of sine waves. John Chowning, the technique's pioneer, theorized that the technology would be capable of accurate recreation of any sound if enough sine waves were used, but budget computer audio cards performed FM synthesis with only two sine waves. Combined with the cards' 8-bit audio, this resulted in a sound described as "artificial" and "primitive". Wavetable daughterboards that were later available provided audio samples that could be used in place of the FM sound. These were expensive, but often used the sounds from respected MIDI instruments such as the E-mu Proteus. The computer industry moved in the mid-1990s toward wavetable-based soundcards with 16-bit playback, but standardized on 2 MB of wavetable storage, a space too small to fit good-quality samples of 128 General MIDI instruments plus drum kits. To make the most of the limited space, some manufacturers stored 12-bit samples and expanded those to 16 bits on playback. Other applications Despite its association with music devices, MIDI can control any electronic or digital device that can read and process a MIDI command. MIDI has been adopted as a control protocol in a number of non-musical applications. MIDI Show Control uses MIDI commands to direct stage lighting systems and to trigger cued events in theatrical productions. VJs and turntablists use it to cue clips and to synchronize equipment, and recording systems use it for synchronization and automation. Apple Motion allows control of animation parameters through MIDI. The 1987 first-person shooter game MIDI Maze and the 1990 Atari ST computer puzzle game Oxyd used MIDI to network computers together. Devices Connectors The cables terminate in a 180° five-pin DIN connector. Standard applications use only three of the five conductors: a ground wire (pin 2), and a balanced pair of conductors (pins 4 and 5) that carry a +5 volt data signal.
This connector configuration can only carry messages in one direction, so a second cable is necessary for two-way communication. Some proprietary applications, such as phantom-powered footswitch controllers, use the spare pins for direct current (DC) power transmission. Opto-isolators keep MIDI devices electrically separated from their MIDI connections, which prevents ground loops and protects equipment from voltage spikes. There is no error detection capability in MIDI, so the maximum cable length is set at 15 meters (50 feet) to limit interference. Most devices do not copy messages from their input to their output port. A third type of port, the "thru" port, emits a copy of everything received at the input port, allowing data to be forwarded to another instrument in a "daisy chain" arrangement. Not all devices contain thru ports, and devices that lack the ability to generate MIDI data, such as effects units and sound modules, may not include out ports. Management devices Each device in a daisy chain adds delay to the system. This is avoided with a MIDI thru box, which contains several outputs that provide an exact copy of the box's input signal. A MIDI merger is able to combine the input from multiple devices into a single stream, and allows multiple controllers to be connected to a single device. A MIDI switcher allows switching between multiple devices, and eliminates the need to physically repatch cables. MIDI patch bays combine all of these functions. They contain multiple inputs and outputs, and allow any combination of input channels to be routed to any combination of output channels. Routing setups can be created using computer software, stored in memory, and selected by MIDI program change commands. This enables the devices to function as standalone MIDI routers in situations where no computer is present. MIDI patch bays also clean up any skewing of MIDI data bits that occurs at the input stage. MIDI data processors are used for utility tasks and special effects. These include MIDI filters, which remove unwanted MIDI data from the stream, and MIDI delays, effects that send a repeated copy of the input data after a set time. Interfaces A computer MIDI interface's main function is to match clock speeds between the MIDI device and the computer. Some computer sound cards include a standard MIDI connector, whereas others connect by any of various means that include the D-subminiature DA-15 game port, USB, FireWire, Ethernet or a proprietary connection. The increasing use of USB connectors in the 2000s led to the availability of MIDI-to-USB data interfaces that can transfer MIDI channels to USB-equipped computers. Some MIDI keyboard controllers are equipped with USB jacks, and can be plugged into computers that run music software. MIDI's serial transmission leads to timing problems. A three-byte MIDI message requires nearly 1 millisecond for transmission. Because MIDI is serial, it can only send one event at a time. If an event is sent on two channels at once, the event on the second channel cannot transmit until the first one is finished, and so is delayed by 1 ms. If an event is sent on all channels at the same time, the last channel's transmission is delayed by as much as 16 ms. This contributed to the rise of MIDI interfaces with multiple in- and out-ports, because timing improves when events are spread between multiple ports as opposed to multiple channels on the same port. The term "MIDI slop" refers to audible timing errors that result when MIDI transmission is delayed.
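These figures follow directly from the link's bit rate, as a quick check shows:

    BIT_RATE = 31250      # MIDI 1.0 serial rate, bits per second
    BITS_PER_BYTE = 10    # eight data bits framed by a start and a stop bit

    def transmission_ms(n_bytes):
        """Time needed to send n_bytes over a single MIDI link, in ms."""
        return n_bytes * BITS_PER_BYTE / BIT_RATE * 1000

    print(transmission_ms(3))       # ~0.96, nearly 1 ms per three-byte message
    print(16 * transmission_ms(3))  # ~15.36, the worst case across 16 channels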
Controllers There are two types of MIDI controllers: performance controllers that generate notes and are used to perform music, and controllers that may not send notes but transmit other types of real-time events. Many devices are some combination of the two types. Keyboards are by far the most common type of MIDI controller. MIDI was designed with keyboards in mind, and any controller that is not a keyboard is considered an "alternative" controller. This was seen as a limitation by composers who were not interested in keyboard-based music, but the standard proved flexible, and MIDI compatibility was introduced to other types of controllers, including guitars, stringed and wind instruments, drums, and specialized and experimental controllers. Other controllers include drum controllers and wind controllers, which can emulate the playing of a drum kit and wind instruments, respectively. Nevertheless, the keyboard-centered features for which MIDI was designed do not fully capture other instruments' capabilities; Jaron Lanier cites the standard as an example of technological "lock-in" that unexpectedly limited what it was possible to express. Some of these shortcomings, such as the lack of per-note pitch bend, are to be addressed in MIDI 2.0, described below. Software synthesizers offer great power and versatility, but some players feel that dividing their attention between a MIDI keyboard and a computer keyboard and mouse robs some of the immediacy from the playing experience. Devices dedicated to real-time MIDI control provide an ergonomic benefit, and can provide a greater sense of connection with the instrument than an interface that is accessed through a mouse or a push-button digital menu. Controllers may be general-purpose devices that are designed to work with a variety of equipment, or they may be designed to work with a specific piece of software. Examples of the latter include Akai's APC40 controller for Ableton Live, and Korg's MS-20ic controller, a reproduction of its MS-20 analog synthesizer. The MS-20ic controller includes patch cables that can be used to control signal routing in the virtual reproduction of the MS-20 synthesizer, and can also control third-party devices. Instruments A MIDI instrument contains ports to send and receive MIDI signals, a CPU to process those signals, an interface that allows user programming, audio circuitry to generate sound, and controllers. The operating system and factory sounds are often stored in a read-only memory (ROM) unit. A MIDI instrument can also be a stand-alone module (without a piano-style keyboard) consisting of a General MIDI sound board (GM, GS or XG) and onboard editing features, including transposition, MIDI instrument changes, and adjustment of volume, pan, reverb levels and other MIDI controllers. Typically, such a MIDI module includes a large screen, so the user can view information for the currently selected function. Features can include scrolling lyrics, usually embedded in a MIDI file or karaoke MIDI, playlists, a song library and editing screens. Some MIDI modules include a harmonizer and the ability to play back and transpose MP3 audio files. Synthesizers Synthesizers may employ any of a variety of sound generation techniques. They may include an integrated keyboard, or may exist as "sound modules" or "expanders" that generate sounds when triggered by an external controller, such as a MIDI keyboard. Sound modules are typically designed to be mounted in a 19-inch rack.
Manufacturers commonly produce a synthesizer in both standalone and rack-mounted versions, and often offer the keyboard version in a variety of sizes. Samplers A sampler can record and digitize audio, store it in random-access memory (RAM), and play it back. Samplers typically allow a user to edit a sample and save it to a hard disk, apply effects to it, and shape it with the same tools that synthesizers use. They also may be available in either keyboard or rack-mounted form. Instruments that generate sounds through sample playback, but have no recording capabilities, are known as "ROMplers". Samplers did not become established as viable MIDI instruments as quickly as synthesizers did, due to the expense of memory and processing power at the time. The first low-cost MIDI sampler was the Ensoniq Mirage, introduced in 1984. MIDI samplers are typically limited by displays that are too small to use to edit sampled waveforms, although some can be connected to a computer monitor. Drum machines Drum machines typically are sample playback devices that specialize in drum and percussion sounds. They commonly contain a sequencer that allows the creation of drum patterns and allows them to be arranged into a song. There often are multiple audio outputs, so that each sound or group of sounds can be routed to a separate output. The individual drum voices may be playable from another MIDI instrument, or from a sequencer. Workstations and hardware sequencers Sequencer technology predates MIDI. Analog sequencers use CV/Gate signals to control pre-MIDI analog synthesizers. MIDI sequencers typically are operated by transport features modeled after those of tape decks. They are capable of recording MIDI performances and arranging them into individual tracks on a multitrack recording model. Music workstations combine controller keyboards with an internal sound generator and a sequencer. These can be used to build complete arrangements and play them back using their own internal sounds, and function as self-contained music production studios. They commonly include file storage and transfer capabilities. Effects devices Some effects units can be remotely controlled via MIDI. For example, the Eventide H3000 Ultra-harmonizer allows such extensive MIDI control that it is playable as a synthesizer. The Drum Buddy, a pedal-format drum machine, has a MIDI connection so that its tempo can be synchronized with a looper pedal or time-based effects such as delay. Technical specifications MIDI messages are made up of 8-bit words (commonly called bytes) that are transmitted serially at a rate of 31.25 kbit/s. This rate was chosen because it is an exact division of 1 MHz, the operational speed of many early microprocessors. The first bit of each word identifies whether the word is a status byte or a data byte, and is followed by seven bits of information. A start bit and a stop bit are added to each byte for framing purposes, so a MIDI byte requires ten bits for transmission. A MIDI link can carry sixteen independent channels of information. The channels are numbered 1–16, but their actual corresponding binary encoding is 0–15. A device can be configured to listen only to specific channels and to ignore the messages sent on other channels ("Omni Off" mode), or it can listen to all channels, effectively ignoring the channel address ("Omni On").
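A short decoder makes this framing concrete: in a channel message's status byte, the high nibble identifies the message type and the low nibble holds the zero-based channel number. A sketch; the status values are those defined by the MIDI 1.0 specification:

    def decode_status(byte):
        """Split a MIDI status byte into its message type and channel."""
        if byte < 0x80:
            raise ValueError("data byte, not a status byte")
        if byte >= 0xF0:
            return ("system", None)   # system messages carry no channel
        kinds = {0x80: "note-off", 0x90: "note-on", 0xA0: "poly aftertouch",
                 0xB0: "control change", 0xC0: "program change",
                 0xD0: "channel aftertouch", 0xE0: "pitch bend"}
        return (kinds[byte & 0xF0], (byte & 0x0F) + 1)  # channels shown as 1-16

    print(decode_status(0x93))  # ('note-on', 4)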
An individual device may be monophonic (the start of a new "note-on" MIDI command implies the termination of the previous note), or polyphonic (multiple notes may sound at once, until the polyphony limit of the instrument is reached, the notes reach the end of their decay envelope, or explicit "note-off" MIDI commands are received). Receiving devices can typically be set to all four combinations of "omni off/on" versus "mono/poly" modes. Messages A MIDI message is an instruction that controls some aspect of the receiving device. A MIDI message consists of a status byte, which indicates the type of the message, followed by up to two data bytes that contain the parameters. MIDI messages can be channel messages sent on only one of the 16 channels and monitored only by devices on that channel, or system messages that all devices receive. Each receiving device ignores data not relevant to its function. There are five types of message: Channel Voice, Channel Mode, System Common, System Real-Time, and System Exclusive. Channel Voice messages transmit real-time performance data over a single channel. Examples include "note-on" messages, which contain a MIDI note number that specifies the note's pitch, a velocity value that indicates how forcefully the note was played, and the channel number; "note-off" messages that end a note; program change messages that change a device's patch; and control changes that allow adjustment of an instrument's parameters. MIDI notes are numbered from 0 to 127, assigned to C−1 through G9. This corresponds to a range of 8.175799 to 12543.85 Hz (assuming equal temperament and 440 Hz A4) and extends beyond the 88-note piano range from A0 to C8. System Exclusive messages System Exclusive (SysEx) messages are a major reason for the flexibility and longevity of the MIDI standard. Manufacturers use them to create proprietary messages that control their equipment more thoroughly than standard MIDI messages could, adding functionality beyond what the MIDI standard provides. SysEx messages are addressed to a specific device in a system and are ignored by all other devices. Each manufacturer has a unique identifier that is included in its SysEx messages, which helps ensure that only the targeted device responds to the message, and that all others ignore it. Many instruments also include a SysEx ID setting, so a controller can address two devices of the same model independently. Implementation chart Devices typically do not respond to every type of message defined by the MIDI specification. The MIDI implementation chart was standardized by the MMA as a way for users to see what specific capabilities an instrument has, and how it responds to messages. A specific MIDI implementation chart is usually published for each MIDI device within the device documentation. Electrical specifications The MIDI 1.0 specification for the electrical interface is based on a fully isolated current loop. The MIDI out port nominally sources +5 volts through a 220 ohm resistor out through pin 4 on the MIDI out DIN connector, in on pin 4 of the receiving device's MIDI in DIN connector, through a 220 ohm protection resistor and the LED of an opto-isolator. The current then returns via pin 5 on the MIDI in port to the originating device's MIDI out port pin 5, again with a 220 ohm resistor in the path, giving a nominal current of about 5 milliamperes.
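The nominal 5 mA figure can be checked with Ohm's law; the opto-isolator LED's forward voltage drop used below is an assumed typical value, not a number taken from the specification:

    SUPPLY_V = 5.0             # nominal +5 volt source
    LOOP_RESISTANCE = 3 * 220  # three 220 ohm resistors in series
    LED_DROP_V = 1.7           # assumed typical opto-isolator LED forward drop

    current_ma = (SUPPLY_V - LED_DROP_V) / LOOP_RESISTANCE * 1000
    print(round(current_ma, 1))  # 5.0, matching the nominal figure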
Despite the cable's appearance, there is no conductive path between the two MIDI devices, only an optically isolated one. Properly designed MIDI devices are relatively immune to ground loops and similar interference. The baud rate on this system is 31,250 symbols per second, with logic 0 being current on. The MIDI specification provides for a ground "wire" and a braid or foil shield, connected on pin 2, protecting the two signal-carrying conductors on pins 4 and 5. Although the MIDI cable is supposed to connect pin 2 and the braid or foil shield to chassis ground, it should do so only at the MIDI out port; the MIDI in port should leave pin 2 unconnected and isolated. Some large manufacturers of MIDI devices use modified MIDI in-only DIN 5-pin sockets with the metallic conductors intentionally omitted at pin positions 1, 2, and 3 so that maximum voltage isolation is obtained. Extensions MIDI's flexibility and widespread adoption have led to many refinements of the standard, and have enabled its application to purposes beyond those for which it was originally intended. General MIDI MIDI allows selection of an instrument's sounds through program change messages, but there is no guarantee that any two instruments have the same sound at a given program location. Program #0 may be a piano on one instrument, or a flute on another. The General MIDI (GM) standard was established in 1991, and provides a standardized sound bank that allows a Standard MIDI File created on one device to sound similar when played back on another. GM specifies a bank of 128 sounds arranged into 16 families of eight related instruments, and assigns a specific program number to each instrument. Percussion instruments are placed on channel 10, and a specific MIDI note value is mapped to each percussion sound. GM-compliant devices must offer 24-note polyphony. Any given program change selects the same instrument sound on any GM-compatible instrument. General MIDI specifies a standard layout of instrument sounds, called patches, each identified by a patch number (program number, PC#) and triggered, for example, by pressing a key on a MIDI keyboard. This layout ensures that MIDI sound modules and other MIDI devices faithfully reproduce the designated sounds expected by the user, and maintains reliable and consistent sound palettes across different manufacturers' MIDI devices. The GM standard eliminates variation in note mapping. Some manufacturers had disagreed over what note number should represent middle C, but GM specifies that note number 69 plays A440, which in turn fixes middle C as note number 60. GM-compatible devices are required to respond to velocity, aftertouch, and pitch bend, to be set to specified default values at startup, and to support certain controller numbers, such as for the sustain pedal, and Registered Parameter Numbers. A simplified version of GM, called GM Lite, is used in mobile phones and other devices with limited processing power. GS, XG, and GM2 A general opinion quickly formed that GM's 128-instrument sound set was not large enough. Roland's General Standard, or GS, system included additional sounds, drumkits and effects, provided a "bank select" command that could be used to access them, and used MIDI Non-Registered Parameter Numbers (NRPNs) to access its new features. Yamaha's Extended General MIDI, or XG, followed in 1994. XG similarly offered extra sounds, drumkits and effects, but used standard controllers instead of NRPNs for editing, and increased polyphony to 32 voices.
Both standards feature backward compatibility with the GM specification, but are not compatible with each other. Neither standard has been adopted beyond its creator, but both are commonly supported by music software titles. Member companies of Japan's AMEI developed the General MIDI Level 2 specification in 1999. GM2 maintains backward compatibility with GM, but increases polyphony to 32 voices; standardizes several controller numbers, such as for sostenuto and soft pedal (una corda), as well as RPNs and Universal System Exclusive Messages; and incorporates the MIDI Tuning Standard. GM2 is the basis of the instrument selection mechanism in Scalable Polyphony MIDI (SP-MIDI), a MIDI variant for low-power devices that allows the device's polyphony to scale according to its processing power. Tuning standard Most MIDI synthesizers use equal temperament tuning. The MIDI tuning standard (MTS), ratified in 1992, allows alternate tunings. MTS allows microtunings that can be loaded from a bank of up to 128 patches, and allows real-time adjustment of note pitches. Manufacturers are not required to support the standard, and those who do are not required to implement all of its features. Time code A sequencer can drive a MIDI system with its internal clock, but when a system contains multiple sequencers, they must synchronize to a common clock. MIDI Time Code (MTC), developed by Digidesign, implements SysEx messages that have been developed specifically for timing purposes, and is able to translate to and from the SMPTE time code standard. MIDI Clock is based on tempo, but SMPTE time code is based on frames per second, and is independent of tempo. MTC, like SMPTE code, includes position information, and can adjust itself if a timing pulse is lost. MIDI interfaces such as Mark of the Unicorn's MIDI Timepiece can convert SMPTE code to MTC. Machine control MIDI Machine Control (MMC) consists of a set of SysEx commands that operate the transport controls of hardware recording devices. MMC lets a sequencer send Start, Stop, and Record commands to a connected tape deck or hard disk recording system, and fast-forward or rewind the device so that it starts playback at the same point as the sequencer. No synchronization data is involved, although the devices may synchronize through MTC. Show control MIDI Show Control (MSC) is a set of SysEx commands for sequencing and remotely cueing show control devices such as lighting, music and sound playback, and motion control systems. Applications include stage productions, museum exhibits, recording studio control systems, and amusement park attractions. Timestamping One solution to MIDI timing problems is to mark MIDI events with the times they are to be played, and store them in a buffer in the MIDI interface ahead of time. Sending data beforehand reduces the likelihood that a busy passage will send a large amount of information that overwhelms the transmission link. Once stored in the interface, the information is no longer subject to timing issues associated with USB jitter and computer operating system interrupts, and can be transmitted with a high degree of accuracy. MIDI timestamping only works when both hardware and software support it. MOTU's MTS, eMagic's AMT, and Steinberg's Midex 8 had implementations that were incompatible with each other, and required users to own software and hardware manufactured by the same company for timestamping to work. Timestamping is built into FireWire MIDI interfaces, Mac OS X Core Audio, and Linux ALSA Sequencer.
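The default equal-temperament mapping described earlier, with note 69 fixed at A440, determines every note number's frequency; a small sketch:

    def note_to_hz(note, a4=440.0):
        """Equal-temperament frequency of MIDI note 0-127 (A4 = note 69)."""
        return a4 * 2 ** ((note - 69) / 12)

    print(note_to_hz(69))   # 440.0, A4
    print(note_to_hz(60))   # ~261.63, middle C
    print(note_to_hz(0))    # ~8.1758, the bottom of the MIDI note range
    print(note_to_hz(127))  # ~12543.85, the top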
Sample dump standard An unforeseen capability of SysEx messages was their use for transporting audio samples between instruments. This led to the development of the sample dump standard (SDS), which established a new SysEx format for sample transmission. The SDS was later augmented with a pair of commands that allow the transmission of information about sample loop points, without requiring that the entire sample be transmitted. Downloadable sounds The Downloadable Sounds (DLS) specification, ratified in 1997, allows mobile devices and computer sound cards to expand their wave tables with downloadable sound sets. The DLS Level 2 specification followed in 2006, and defined a standardized synthesizer architecture. The Mobile DLS standard calls for DLS banks to be combined with SP-MIDI, as self-contained Mobile XMF files. MIDI Polyphonic Expression MIDI Polyphonic Expression (MPE) is a method of using MIDI that enables pitch bend, and other dimensions of expressive control, to be adjusted continuously for individual notes. MPE works by assigning each note to its own MIDI channel so that particular messages can be applied to each note individually. The specifications were released in November 2017 by AMEI and in January 2018 by the MMA. Instruments like the Continuum Fingerboard, LinnStrument, ROLI Seaboard, Sensel Morph, and Eigenharp let users control pitch, timbre, and other nuances for individual notes within chords. Alternative hardware transports In addition to the original 31.25 kbit/s current loop transported on 5-pin DIN, other connectors have been used for the same electrical data, and transmission of MIDI streams in different forms over USB, IEEE 1394 a.k.a. FireWire, and Ethernet is now common. Some samplers and hard drive recorders can also pass MIDI data between each other over SCSI. USB and FireWire Members of the USB-IF in 1999 developed a standard for MIDI over USB, the "Universal Serial Bus Device Class Definition for MIDI Devices". MIDI over USB has become increasingly common as other interfaces that had been used for MIDI connections (serial, joystick, etc.) disappeared from personal computers. Linux, Microsoft Windows, Mac OS X, and Apple iOS operating systems include standard class drivers to support devices that use the "Universal Serial Bus Device Class Definition for MIDI Devices". Some manufacturers choose to implement a MIDI interface over USB that is designed to operate differently from the class specification, using custom drivers. Apple Computer developed the FireWire interface during the 1990s. It began to appear on digital video cameras toward the end of the decade, and on G3 Macintosh models in 1999. It was created for use with multimedia applications. Unlike USB, FireWire uses intelligent controllers that can manage their own transmission without attention from the main CPU. As with standard MIDI devices, FireWire devices can communicate with each other with no computer present. XLR connectors The Octave-Plateau Voyetra-8 synthesizer was an early MIDI implementation using XLR3 connectors in place of the 5-pin DIN. It was released in the pre-MIDI years and later retrofitted with a MIDI interface, but kept its XLR connector. Serial, parallel, and joystick ports As computer-based studio setups became common, MIDI devices that could connect directly to a computer became available. These typically used the 8-pin mini-DIN connector that was used by Apple for serial ports prior to the introduction of the Blue & White G3 models.
MIDI interfaces intended for use as the centerpiece of a studio, such as the Mark of the Unicorn MIDI Time Piece, were made possible by a "fast" transmission mode that could take advantage of these serial ports' ability to operate at 20 times the standard MIDI speed. Mini-DIN ports were built into some late-1990s MIDI instruments, and enabled such devices to be connected directly to a computer. Some devices connected via PCs' DB-25 parallel port, or through the joystick port found in many PC sound cards. mLAN Yamaha introduced the mLAN protocol in 1999. It was conceived as a local area network for musical instruments using FireWire as the transport, and was designed to carry multiple MIDI channels together with multichannel digital audio, data file transfers, and time code. mLAN was used in a number of Yamaha products, notably digital mixing consoles and the Motif synthesizer, and in third-party products such as the PreSonus FIREstation and the Korg Triton Studio. No new mLAN products have been released since 2007. Ethernet and Internet Computer network implementations of MIDI provide network routing capabilities and the high-bandwidth channel that earlier alternatives to MIDI, such as ZIPI, were intended to bring. Proprietary implementations have existed since the 1980s, some of which use fiber optic cables for transmission. The Internet Engineering Task Force's RTP-MIDI open specification has gained industry support. Apple has supported this protocol from Mac OS X 10.4 onwards, and a Windows driver based on Apple's implementation exists for Windows XP and newer versions. Wireless Systems for wireless MIDI transmission have been available since the 1980s. Several commercially available transmitters allow wireless transmission of MIDI and OSC signals over Wi-Fi and Bluetooth. iOS devices are able to function as MIDI control surfaces, using Wi-Fi and OSC. An XBee radio can be used to build a wireless MIDI transceiver as a do-it-yourself project. Android devices are able to function as full MIDI control surfaces using several different protocols over Wi-Fi and Bluetooth. TRS minijack Some devices use standard 3.5 mm TRS audio minijack connectors for MIDI data, including the Korg Electribe 2 and the Arturia Beatstep Pro. Both come with adaptors that break out to standard 5-pin DIN connectors. This became widespread enough that the MIDI Manufacturers Association standardized the wiring. The MIDI-over-minijack standards document also recommends the use of 2.5 mm connectors over 3.5 mm ones to avoid confusion with audio connectors. MIDI 2.0 The MIDI 2.0 standard was presented on 17 January 2020 at the Winter NAMM Show in Anaheim, California, at a session titled "Strategic Overview and Introduction to MIDI 2.0" by representatives of Yamaha, Roli, Microsoft, Google, and the MIDI Association. This significant update adds bidirectional communication while maintaining backwards compatibility. The new protocol had been researched since 2005. Prototype devices were shown privately at NAMM using wired and wireless connections, and licensing and product certification policies were developed; however, no projected release date was announced at the time. Proposed physical and transport layers included Ethernet-based protocols such as RTP MIDI and Audio Video Bridging/Time-Sensitive Networking, as well as User Datagram Protocol (UDP)-based transport.
AMEI and the MMA announced that complete specifications would be published following interoperability testing of prototype implementations from major manufacturers such as Google, Yamaha, Steinberg, Roland, Ableton, Native Instruments, and ROLI, among others. In January 2020, Roland announced the A-88mkII controller keyboard that supports MIDI 2.0. MIDI 2.0 includes the MIDI Capability Inquiry specification for property exchange and profiles, and the new Universal MIDI Packet format for high-speed transports, which supports both MIDI 1.0 and MIDI 2.0 voice messages. MIDI Capability Inquiry MIDI Capability Inquiry (MIDI-CI) specifies Universal SysEx messages to implement device profiles, property exchange, and MIDI protocol negotiation. The specifications were released in November 2017 by AMEI and in January 2018 by the MMA. Property exchange defines methods for inquiry of device capabilities, such as supported controllers, patch names, instrument profiles, device configuration and other metadata, and for getting or setting device configuration settings; it uses System Exclusive messages that carry JSON-format data. Profiles define common sets of MIDI controllers for various instrument types, such as drawbar organs and analog synths, or for particular tasks, improving interoperability between instruments from different manufacturers. Protocol negotiation allows devices to employ the next-generation protocol or manufacturer-specific protocols. Universal MIDI Packet MIDI 2.0 defines a new Universal MIDI Packet format, which contains messages of varying length (32, 64, 96 or 128 bits) depending on the payload type. This new packet format supports a total of 256 MIDI channels, organized in 16 groups of 16 channels; each group can carry either a MIDI 1.0 Protocol stream or a MIDI 2.0 Protocol stream, and can also include system messages, system exclusive data, and timestamps for precise rendering of several simultaneous notes. To simplify initial adoption, existing products are explicitly allowed to implement only MIDI 1.0 messages. The Universal MIDI Packet is intended for high-speed transports such as USB and Ethernet and is not supported on the existing 5-pin DIN connections. System Real-Time and System Common messages are the same as defined in MIDI 1.0. New protocol As of January 2019, the draft specification of the new protocol supports all core messages that also exist in MIDI 1.0, but extends their precision and resolution; it also defines many new high-precision controller messages. The specification defines default translation rules to convert between MIDI 2.0 Channel Voice and MIDI 1.0 Channel Voice messages that use different data resolution, as well as to map 256 MIDI 2.0 streams to 16 MIDI 1.0 streams. Data transfer formats System Exclusive 8 messages use a new 8-bit data format, based on Universal System Exclusive messages. Mixed Data Set messages are intended to transfer large sets of data. System Exclusive 7 messages use the previous 7-bit data format. See also ABC notation Digital piano Electronic drum module Guitar synthesizer List of music software MIDI mockup MusicXML Music Macro Language Open Sound Control SoundFont Scorewriter Synthesia Synthetic music mobile application format Notes References External links The MIDI Association English-language MIDI specifications at the MIDI Manufacturers Association Computer hardware standards Electronic music Japanese inventions Serial buses
432962
https://en.wikipedia.org/wiki/UnionFS
UnionFS
Unionfs is a filesystem service for Linux, FreeBSD and NetBSD which implements a union mount for other file systems. It allows files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system. Contents of directories that have the same path within the merged branches will be seen together in a single merged directory, within the new, virtual filesystem. When mounting branches, the priority of one branch over the other is specified, so when both branches contain a file with the same name, one gets priority over the other. The different branches may be either read-only or read-write file systems, so that writes to the virtual, merged copy are directed to a specific real file system. This allows a file system to appear writable without writes actually changing the underlying file system, a technique known as copy-on-write. This may be desirable when the media is physically read-only, such as in the case of Live CDs. Unionfs was originally developed by Professor Erez Zadok and his team at Stony Brook University. Uses In Knoppix, a union can be made between the file system on the CD-ROM or DVD and a file system contained in an image file called knoppix.img (knoppix-data.img for Knoppix 7) on a writable drive (such as a USB memory stick), where the writable drive has priority over the read-only filesystem. This allows the user to change any of the files on the system, with the new file stored in the image and transparently used instead of the one on the CD. Unionfs can also be used to create a single common template for a number of file systems, or for security reasons. It is sometimes used as an ad hoc snapshotting system. Docker uses file systems inspired by Unionfs, such as Aufs, to layer Docker images. As actions are done to a base image, layers are created and documented, such that each layer fully describes how to recreate an action. This strategy enables Docker's lightweight images, as only layer updates need to be propagated (compared to full VMs, for example). Ubuntu LTSP, the Linux Terminal Server Project implementation for Ubuntu, uses Unionfs when PXE booting thin or thick clients. Other implementations Unionfs for Linux has two versions. Version 1.x is a standalone version that can be built as a module; version 2.x is a newer, redesigned and reimplemented one. aufs is an alternative version of unionfs. overlayfs, written by Miklos Szeredi, has been used in OpenWrt and considered by Ubuntu; after many years of development and discussion, it was merged into the mainline Linux kernel on 26 October 2014, for version 3.18 of the kernel. unionfs-fuse is an independent project, implemented as a user space filesystem program instead of a kernel module or patch. Like Unionfs, it supports copy-on-write and read-only or read-write branches. The Plan 9 from Bell Labs operating system uses union mounts extensively to build custom namespaces per user or process. Union mounts have also been available in BSD since at least 1995. The GNU Hurd has an implementation of Unionfs. As of January 2008, it works, but results in a read-only mount-point. mhddfs works like Unionfs but permits balancing files over the drives with the most free space available. It is implemented as a user space filesystem. mergerfs is a FUSE-based union filesystem which offers multiple policies for accessing and writing files as well as other advanced features (xattrs, managing mixed RO and RW drives, link CoW, etc.).
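The mechanism these implementations share can be modeled in a few lines: names are resolved by searching the branches in priority order, and writes always land in the topmost writable branch, which yields copy-on-write behavior. A purely illustrative Python sketch of the idea; real implementations live in the kernel or behind FUSE:

    import os

    class Union:
        def __init__(self, branches):
            self.branches = branches  # highest-priority (writable) branch first

        def resolve(self, path):
            """Return the real location of path, honoring branch priority."""
            for root in self.branches:
                candidate = os.path.join(root, path)
                if os.path.exists(candidate):
                    return candidate
            raise FileNotFoundError(path)

        def write(self, path, data):
            """Copy-on-write direction: changes go only to the top branch,
            leaving the lower, possibly read-only branches untouched."""
            with open(os.path.join(self.branches[0], path), "wb") as f:
                f.write(data)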
Sun Microsystems introduced the first implementation of a stacked, layered file system with copy-on-write, whiteouts (hiding files in lower layers from higher layers), and similar features as the Translucent File Service in SunOS 3, circa 1986. JailbreakMe 3.0, a tool for jailbreaking iOS devices released in July 2011, uses unionfs techniques to speed up the installation process of the operating system modification. See also OverlayFS Aufs References External links A FUSE-based alternative implementation of Unionfs FunionFS – Another FUSE-based implementation of Unionfs The new unionfs implementation for FreeBSD and status of merging (2007-10-23) On Incremental File System Development LUFS-based unionfs for Linux (based on LUFS) DENX U-Boot and Linux Guide: Overlay File Systems Free special-purpose file systems File systems supported by the Linux kernel Union file systems
3390806
https://en.wikipedia.org/wiki/Yahoo%21%20Assistant
Yahoo! Assistant
Yahoo Assistant, formerly named 3721 Internet Assistant, is a Browser Helper Object for Internet Explorer developed by Beijing 3721 Technology Co. Ltd; it was renamed Yahoo Assistant after Beijing 3721 Technology was acquired by Yahoo. 3721 Internet Assistant, together with 3721 Chinese Keywords, is identified as spyware by Microsoft AntiSpyware, and as malware or a browser hijacker by some others, such as Panda Antivirus. However, Yahoo China filed a lawsuit against Beijing Sanjiwuxian Internet Technology Co. Ltd, the developer of the 360Safe antispyware, for identifying Yahoo Assistant as malware in 360Safe. Distribution 3721 Internet Assistant was originally released as a normal client-server application. However, it later turned to ActiveX technology to install itself on client systems, and was also shipped with many shareware programs as a default install option. 3721 Internet Assistant was also blamed for its use of a flaw in Microsoft Internet Explorer to install itself automatically when a user was browsing an array of 3721-sponsored personal and commercial websites with Microsoft Internet Explorer. Yahoo! Assistant is also included in 3721 Chinese Keywords and Yahoo! Mail Express, but sometimes the whole package of Internet Assistant, Chinese Keywords and Mail Express is named "Yahoo Assistant" in some shareware bundles. The company says the automatic installation ended in September 2005 and that it now asks the user's permission before installing; however, CA Inc. reported that during Yahoo! Assistant installation, extra components are installed without obtaining the user's consent. This software is also bundled with the Chinese client of the CGA Gaming platform. Features 3721 claims 3721 Internet Assistant includes many useful features, such as IE setting repair, a security shield, removal of internet history information and ad blocking. However, it installs various Windows hooks that slow down the system, and tries to install the hooks repeatedly. Some users also reported that Internet Assistant buttons reappeared immediately after their manual removal using Internet Explorer customization features, and that the Blue Screen of Death appeared when using Internet Assistant. Internet Explorer extension hijacking 3721 Internet Assistant will enable or disable other Internet Explorer extensions, except the advertisement links and extensions installed by Yahoo products. Concealment and resistance to user termination 3721 Internet Assistant runs under multiple rundll32.exe processes. If one of them is killed in Windows Task Manager, it will immediately be restarted by the others, thereby resisting efforts by a user to terminate the application. A driver named CnsMinKP.sys is installed with 3721 Internet Assistant, along with several hidden Windows services. After uninstallation, several files are left on the system, but they are not visible in Windows Explorer. They can be found by using tools such as Total Commander, or in the DOS box. Removal of antispyware program Yahoo Assistant also removes 360Safe, an antispyware program of a competitor, without notifying the user. On August 15, 2007, a Beijing court ruled that this behavior constituted unfair competition. Uninstall 3721 Internet Assistant, together with 3721 Chinese Keywords, according to Interfax, are regarded by Chinese internet users as "Hooligan" or "Zombie" applications.
The uninstall program for the pair provided by 3721 simply redirects users to the 3721 website (in Simplified Chinese, and thus not readable except by Chinese speakers), and the default option of the web page is to keep 3721 Internet Assistant after the uninstallation. After following the web uninstallation wizard and rebooting, many 3721 files still remain on the client system. The pair were ranked #1 by the Beijing Association of Online Media in its 2005 list of Chinese malware. References External links an introduction on ca.com Rootkits Assistant
13949938
https://en.wikipedia.org/wiki/Intel%20Atom
Intel Atom
Intel Atom is the brand name for a line of IA-32 and x86-64 instruction set ultra-low-voltage microprocessors by Intel Corporation, designed to reduce power consumption and heat dissipation in comparison with ordinary processors of the Intel Core series. Atom is mainly used in netbooks, nettops, embedded applications ranging from health care to advanced robotics, and mobile Internet devices (MIDs). The line was originally designed on a 45 nm complementary metal–oxide–semiconductor (CMOS) process, and subsequent models, codenamed Cedarview (the Cedar Trail platform), used a 32 nm process. The first generation of Atom processors is based on the Bonnell microarchitecture. On December 21, 2009, Intel announced the Pine Trail platform, including a new Atom processor code-named Pineview (Atom N450), with total kit power consumption down 20%. On December 28, 2011, Intel updated the Atom line with the Cedarview processors. In December 2012, Intel launched the 64-bit Centerton family of Atom CPUs, designed specifically for use in servers. Centerton adds features previously unavailable in Atom processors, such as Intel VT virtualization technology and support for ECC memory. On September 4, 2013, Intel launched a 22 nm successor to Centerton, codenamed Avoton. History Intel Atom is a direct successor of the Intel A100 and A110 low-power microprocessors (code-named Stealey), which were built on a 90 nm process, had 512 kB of L2 cache, and ran at 600 MHz/800 MHz with a 3 W TDP (thermal design power). Prior to the Silverthorne announcement, outside sources had speculated that Atom would compete with AMD's Geode system-on-a-chip processors, used by the One Laptop per Child (OLPC) project, and other cost- and power-sensitive applications for x86 processors. However, Intel revealed on October 15, 2007, that it was developing another new mobile processor, codenamed Diamondville, for OLPC-type devices. "Atom" was the name under which Silverthorne would be sold, while the supporting chipset formerly code-named Menlow was called Centrino Atom. At the Spring Intel Developer Forum (IDF) 2008 in Shanghai, Intel officially announced that Silverthorne and Diamondville were based on the same microarchitecture. Silverthorne would be called the Atom Z5xx series, and Diamondville would be called the Atom N2xx series. The more expensive, lower-power Silverthorne parts were to be used in Intel mobile Internet devices (MIDs), whereas Diamondville was to be used in low-cost desktops and notebooks. Several Mini-ITX motherboard samples were also revealed. Intel and Lenovo also jointly announced an Atom-powered MID called the IdeaPad U8. In April 2008, a MID development kit was announced by Sophia Systems, and the first board, called CoreExpress-ECO, was revealed by the German company LiPPERT Embedded Computers GmbH. Intel also offers Atom-based motherboards. In December 2012, Intel released Atom for servers, the S1200 series. The primary difference between these processors and all prior versions is that ECC memory support has been added, enabling the use of the Atom in mission-critical server environments that demand redundancy and memory failure protection. Availability Atom processors became available to system manufacturers in 2008. Because they are soldered onto a mainboard, like northbridges and southbridges, Atom processors are not available to home users or system builders as separate processors, although they may be obtained preinstalled on some ITX motherboards.
Diamondville and Pineview Atoms are used in the HP Mini series, Asus N10, Lenovo IdeaPad S10, Acer Aspire One and Packard Bell's "dot" (ZG5), recent ASUS Eee PC systems, Sony VAIO M series, AMtek Elego, Dell Inspiron Mini series, Gigabyte M912, LG X series, Samsung NC10, Sylvania g Netbook Meso, Toshiba NB series (100, 200, 205, 255, 300, 500, 505), MSI Wind PC netbooks, RedFox Wizbook 1020i, Sony Vaio X series, Zenith Z-Book, a range of Aleutia desktops, Magic W3, Archos, and the ICP-DAS LP-8381-Atom. The Pineview line is also used in multiple AAC devices for disabled individuals who are unable to speak; these devices assist the user in everyday communication using dedicated speech software. Marketing Intel has applied the Atom branding to product lines targeting several different market segments, including MID/UMPC/smartphone, netbook/nettop, tablet, embedded, wireless base stations (for 5G networking infrastructure), microserver/server, and consumer electronics. Intel consumer electronics (CE) SoCs are marketed under the Atom brand. Prior to the application of the Atom brand, there were a number of Intel CE SoCs, including Olo River (the CE 2110, which had an XScale ARM architecture) and Canmore (the CE 3100, which, like Stealey and Tolapai, had a 90 nm Pentium M microarchitecture). Intel Atom CE-branded SoCs include Sodaville, Groveland, and Berryville. Instruction set architecture All Atom processors implement the x86 (IA-32) instruction set; however, support for the AMD64 instruction set was not added until the desktop Diamondville and the desktop and mobile Pineview cores. The Atom N2xx and Z5xx series models cannot run x86-64 code. The Centerton server processors support the Intel 64 instruction set. Intel states that the Atom supports 64-bit operation only "with a processor, chipset, BIOS" that all support Intel 64. Atom systems not meeting all of these requirements cannot enable Intel 64. As a result, the ability of an Atom-based system to run 64-bit versions of operating systems may vary from one motherboard to another. Online retailer mini-itx.com has tested Atom-based motherboards made by Intel and Jetway: while it was able to install 64-bit versions of Linux on Intel-branded motherboards with D2700 (Cedarview; supports a maximum of 4 GB of DDR3-800/1066 memory) processors, Intel 64 support was not enabled on a Jetway-branded motherboard with a D2550 (Cedarview) processor. Even among Atom-based systems which have Intel 64 enabled, not all are able to run 64-bit versions of Microsoft Windows. For those Pineview processors which support 64-bit operation, Intel Download Center currently provides 64-bit Windows Vista and Windows 7 drivers for the Intel GMA 3150 graphics found in Pineview processors. However, no 64-bit Windows drivers are available for the Intel Atom Cedarview processors released in Q3 2011. Intel's Bay Trail-M processors, built on the Silvermont microarchitecture and released in the second half of 2013, regained 64-bit support, although driver support for Linux and Windows 7 was limited at launch. The lack of 64-bit Windows support for Cedarview processors appears to be due to a driver issue. A member of the Intel Enthusiast Team stated in a series of posts on the enthusiast site Tom's Hardware that while the Atom D2700 (Cedarview) was designed with Intel 64 support, due to a "limitation of the board" Intel had pulled its previously available 64-bit drivers for Windows 7 and would not provide any further 64-bit support.
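Whether a particular board actually exposes Intel 64 can be verified from a running Linux system, which is essentially what such compatibility testing checks. A minimal sketch (Linux-only; the function name is illustrative):

```python
# A minimal, Linux-only sketch of how one can verify whether a CPU
# advertises Intel 64 / AMD64 ("lm", long mode) in /proc/cpuinfo. The
# function name is illustrative. Note that a capable CPU may still have
# Intel 64 disabled by the board or BIOS, as described above.
def cpu_supports_long_mode(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "lm" in line.split(":", 1)[1].split()
    return False

print("x86-64 long mode advertised:", cpu_supports_long_mode())
```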
Some system manufacturers have similarly stated that their motherboards with Atom Cedarview processors lack 64-bit support due to a "lack of Intel® 64-bit VGA driver support". Because all Cedarview processors use the same Intel GMA 3600 or 3650 graphics as the D2700, this indicates that Atom Cedarview systems will remain unable to run 64-bit versions of Windows, even those which have Intel 64 enabled and are able to run 64-bit versions of Linux. Microarchitecture The first Atom processors were based on the Bonnell microarchitecture, which can execute up to two instructions per cycle. Like many other x86 microprocessors, they translate x86 instructions (CISC instructions) into simpler internal operations (sometimes referred to as micro-ops, i.e., effectively RISC-style instructions) prior to execution. The majority of instructions produce one micro-op when translated, with around 4% of instructions used in typical programs producing multiple micro-ops. The number of instructions that produce more than one micro-op is significantly smaller than in the P6 and NetBurst microarchitectures. In the Bonnell microarchitecture, internal micro-ops can contain both a memory load and a memory store in connection with an ALU operation, making them more similar to the x86 level and more powerful than the micro-ops used in previous designs. This enables relatively good performance with only two integer ALUs and without any instruction reordering, speculative execution, or register renaming. The Bonnell microarchitecture therefore represents a partial revival of the principles used in earlier Intel designs such as the P5 and the i486, with the sole purpose of enhancing the performance-per-watt ratio. However, Hyper-Threading is implemented in a simple (i.e., low-power) way to keep the whole pipeline busy by avoiding typical single-thread dependencies. Atom-branded processors have historically featured the following microarchitectures: Bonnell Saltwell Silvermont Airmont Goldmont Goldmont Plus Tremont Gracemont Performance The performance of a single-core Atom is about half that of a Pentium M of the same clock rate. For example, the Atom N270 (1.60 GHz) found in many netbooks such as the Eee PC can deliver around 3300 MIPS and 2.1 GFLOPS in standard benchmarks, compared to 7400 MIPS and 3.9 GFLOPS for the similarly clocked (1.73 GHz) Pentium M 740 (roughly 2,100 versus 4,300 MIPS per GHz). The Pineview platform proved to be only slightly faster than the preceding Diamondville platform, because Pineview uses the same Bonnell execution core as Diamondville and is connected to the memory controller via the FSB; hence memory latency and performance in CPU-intensive applications are minimally improved. Collaborations In March 2009, Intel announced that it would be collaborating with TSMC on the production of Atom processors. The deal was put on hold in 2010 due to lack of demand. On September 13, 2011, Intel and Google held a joint announcement of a partnership to provide support in Google's Android operating system for Intel processors (beginning with the Atom). This would allow Intel to supply chips for the growing smartphone and tablet market. Based on this collaboration, in 2012 Intel announced a new system-on-chip (SoC) platform designed for smartphones and tablets which would use the Atom line of CPUs. It was a continuation of the partnership announced by Intel and Google on September 13, 2011, to provide support for the Android operating system on Intel x86 processors.
This range competed with existing SoCs developed for the smartphone and tablet market by companies such as Texas Instruments, Nvidia, Qualcomm, and Samsung. On April 29, 2016, Intel announced the cancellation of the Broxton SoC for smartphones and tablets. Broxton was to use the newest Atom microarchitecture (Goldmont on a 14 nm node) in combination with an Intel modem. Competition Embedded processors based on the ARM version 7 instruction set architecture (such as Nvidia's Tegra 3 series, TI's OMAP 3 series and Freescale's i.MX51 based on the Cortex-A8 core, or the Qualcomm Snapdragon and Marvell Armada 500/600 based on custom ARMv7 implementations) offer similar performance to the low-end Atom chipsets, but at roughly one quarter the power consumption and (like most ARM systems) as a single integrated system on a chip, rather than the two-chip solution of the current Atom line. Although the second-generation Atom, codenamed "Pineview", was expected to greatly increase its competitiveness in performance per watt, ARM planned to counter the threat with the multi-core-capable Cortex-A9 core as used in Nvidia's Tegra 2/3, TI's OMAP 4 series, and Qualcomm's next-generation Snapdragon series, among others. The Nano and Nano Dual-Core series from VIA are slightly above the average thermal envelope of the Atom, but offer hardware AES support, random number generators, and out-of-order execution. Performance comparisons of the Intel Atom against the VIA Nano indicate that a single-core Intel Atom is easily outperformed by the VIA Nano, which is in turn outperformed by a dual-core Intel Atom 330 in tests where multithreading is used. The Core 2 Duo SU7300 outperforms the dual-core Nano. The Xcore86 (also known as the PMX 1000) is an x586-based system on a chip (SoC) that offers a below-average thermal envelope compared to the Atom. In 2014, Kenton Williston of EE Times said that while Atom would not displace ARM from its current markets, the ability to apply the PC architecture in smaller, cheaper and lower-power form factors would open up new markets for Intel. In 2014, ARM claimed that Intel's Atom processors offered less compatibility and lower performance than its chips when running Android, and higher power consumption and less battery life for the same tasks under both Android and Windows. Issues In February 2017, Cisco Systems reported a clock signal issue that would disable several of its products. Cisco stated, "we expect product failures to increase over the years, beginning after the unit has been in operation for approximately 18 months". Soon after, The Register broke the news that this issue was linked to the Intel Atom C2000 family of SoCs, and reports of other vendors being affected started appearing online. See also List of Intel Atom microprocessors Intel Edison Intel Quark Notes References linuxdevices.com - Intel announces first Atom chips hardwaresecrets.com - Inside Atom Architecture computermonger.com - Intel Atom N280 vs N270 Benchmarked LinuxTECH.NET - Intel Pineview Atom based Motherboards Complete Overview - FYI: Ticking time-bomb fault will brick Cisco gear after 18 months - Intel Atom SoC bricking more than Cisco products External links Intel - Intel Atom Processor Overview Intel - Intel Atom Processor Family Atom Atom Computer-related introductions in 2008
3771347
https://en.wikipedia.org/wiki/Software%20design%20description
Software design description
A software design description (also known as a software design document, SDD, design document, or software design specification) is a representation of a software design that is used for recording design information, addressing various design concerns, and communicating that information to the design's stakeholders. An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. In practice, the description is needed to coordinate a large team under a single vision; it must be a stable reference that outlines all parts of the software and how they will work. Composition The SDD usually contains the following information: The data design describes structures that reside within the software. Attributes and relationships between data objects dictate the choice of data structures. The architecture design uses information-flow characteristics and maps them onto the program structure. The transformation mapping method is applied to exhibit distinct boundaries between incoming and outgoing data. Data flow diagrams allocate control input, processing, and output along three separate modules. The interface design describes internal and external program interfaces, as well as the design of the human interface. Internal and external interface designs are based on the information obtained from the analysis model. The procedural design describes structured programming concepts using graphical, tabular, and textual notations. These design media enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work. IEEE 1016 IEEE 1016-2009, titled IEEE Standard for Information Technology—Systems Design—Software Design Descriptions, is an IEEE standard that specifies "the required information content and organization" for an SDD. IEEE 1016 does not specify the medium of an SDD; it is "applicable to automated databases and design description languages but can be used for paper documents and other means of descriptions." The 2009 edition was a major revision of IEEE 1016-1998, elevating it from a recommended practice to a full standard. This revision was modeled after IEEE Std 1471-2000, Recommended Practice for Architectural Description of Software-intensive Systems, extending the concepts of view, viewpoint, stakeholder, and concern from architecture description to support documentation of high-level and detailed design and construction of software. [IEEE 1016, Introduction] Following the IEEE 1016 conceptual model, an SDD is organized into one or more design views. Each design view follows the conventions of its design viewpoint. IEEE 1016 defines the following design viewpoints for use: Context viewpoint Composition viewpoint Logical viewpoint Dependency viewpoint Information viewpoint Patterns use viewpoint Interface viewpoint Structure viewpoint Interaction viewpoint State dynamics viewpoint Algorithm viewpoint Resource viewpoint In addition, users of the standard are not limited to these viewpoints but may define their own; a sketch showing how the predefined viewpoints can scaffold an SDD outline appears at the end of this entry. IEEE Status IEEE 1016-2009 is currently listed as 'Inactive - Reserved'. See also Game design document High-level design Low-level design References External links IEEE 1016 website Software design Software documentation IEEE standards
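As an illustration of how the IEEE 1016 viewpoints listed above can scaffold a design document, here is a minimal sketch (the tooling is hypothetical; the standard mandates no particular format or automation):

```python
# An illustrative sketch (the standard mandates no tooling or format) of
# using the IEEE 1016 design viewpoints listed above to scaffold a
# skeleton SDD outline.
VIEWPOINTS = [
    "Context", "Composition", "Logical", "Dependency", "Information",
    "Patterns use", "Interface", "Structure", "Interaction",
    "State dynamics", "Algorithm", "Resource",
]

def sdd_skeleton(system_name: str) -> str:
    lines = [f"Software Design Description: {system_name}", ""]
    for i, viewpoint in enumerate(VIEWPOINTS, start=1):
        lines.append(f"{i}. {viewpoint} view (governed by the {viewpoint} viewpoint)")
    return "\n".join(lines)

print(sdd_skeleton("ExampleSystem"))
```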
7849030
https://en.wikipedia.org/wiki/Insight%20Enterprises
Insight Enterprises
Insight Enterprises Inc. is an Arizona-based, publicly traded global technology company that focuses on business-to-business and information technology (IT) capabilities for enterprises. Insight focuses on three primary solution areas: cloud and data center transformation, connected workforce, and digital innovation. The company is listed on the Fortune 500 and has offices in 19 countries. History Early years Hard Drives International was founded in 1988 by Eric and Tim Crown. Initially a mail-order business selling computer storage, the company expanded into a storefront when credit card companies would not service startup mail-order firms. In 1991, the company changed its name to Insight Enterprises, and its distribution included a full line of computers and accessories. The company held its initial public offering in January 1995. Insight became an international company when it acquired TC Computers, based in Montreal, Canada, in 1997. In April 1998, Insight signed an agreement to acquire Choice Peripherals Ltd. and Plusnet Technologies Ltd., an Internet service provider and website hosting and development company operating as Force 9. The acquisition expanded Insight's operations to Europe. 2000-2010 Insight acquired Action Computer Supplies Holdings PLC, a U.K.-based direct marketer, in October 2001 for approximately $150 million in stock. In April 2002, the company acquired Comark for $150 million, increasing Insight's ability to work with clients of all sizes, including the public sector. In July 2006, Insight entered an agreement to purchase Software Spectrum, a company focused on software and mobility products for medium and large companies, from Level 3 Communications for $287 million. Calence LLC, a technology company focused on Cisco networking and advanced communications, was acquired by Insight in April 2008 for $125 million. Insight acquired U.K.-based Minx Ltd., a European network integrator with Cisco Gold Partner accreditation, in July 2008 for $1.5 million and the assumption of $4.6 million of existing debt. During these years, a number of companies were founded by Insight employees, such as ComputerSupport.com. 2011-present Ensynch, an information technology company founded in 2000, was acquired by Insight in September 2011. Insight acquired Inmac GmbH, a business-to-business hardware reseller based in Germany, in February 2012. In June 2015, Insight underwent a corporate rebranding that aimed to emphasize the company's focus on customer relations and interaction. The company launched a new website in July 2015 as part of the shift in company focus. In October 2015, Insight acquired BlueMetal, an interactive design and technology architecture firm based in Boston. Also in 2015, Insight raised $200,000 in its annual campaign for its Noble Cause division. Through Noble Cause, Insight gives to local charities including the Make-A-Wish Foundation, the Boys & Girls Clubs of America, and the Ronald McDonald House. In 2016, BlueMetal partnered with INDYCAR, the Indianapolis Motor Speedway, and Microsoft to produce an analytics-focused app during the Indianapolis 500. Insight opened an additional sales center at the Meadows Office and Technological Park in Conway, Arkansas, in August 2016. Insight announced the acquisition of Ignia, an Australian-based business technology consulting company, in September 2016.
In November 2016, Insight Enterprises announced that it was acquiring Datalink, an Eden Prairie, Minnesota-based data center services and solutions company, for $11.25 per share in cash, with the company otherwise remaining mostly the same. The $258 million deal closed in the first quarter of 2017 and allowed Insight Enterprises to enhance its data center services. In September 2017, Insight announced that it had acquired the Dutch cloud service provider Caase.com. In August 2018, Insight announced the acquisition of Cardinal Solutions, a national provider of digital solutions across mobile, web, analytics, and cloud. In August 2019, Insight Enterprises announced the acquisition of PCM, Inc., a global provider of IT products and services, expanding Insight’s operations in the US, Canada and the UK. Awards 2015 HP PartnerOne Financial Services Partner of the Year Insight makes Fortune 500 list in 2015 2016 Dell EMC Philanthropic Partner of the Year HP Personal Systems Reseller of the Year for the U.S. Channel Microsoft Australia Excellence in Industry and Platform Innovation Veritas Pacific Software Licensing Partner of the Year 2018 Achievers 50 Most Engaged Workplaces in North America Fortune 100 Best Workplaces for Diversity Microsoft Singapore Modern Workplace Transformation Partner of the Year, and Security and Compliance Partner of the Year Microsoft U.S. Partner Award for Apps and Infrastructure – DevOps Microsoft U.S. Surface Fastest Growing Reseller Microsoft Worldwide Artificial Intelligence Partner of the Year Microsoft Worldwide Modern Desktop Partner of the Year Phoenix Business Journal Corporate Philanthropy Volunteerism Finalist 2019 No. 23 on Fortune’s 50 Best Workplaces in Technology No. 430 on the Fortune 500, 11th year of placing Australian IoT Best Primary Industry IoT Project No. 70 on Fortune 100 Best Workplaces for Diversity References Companies based in Tempe, Arizona Software companies based in Arizona Multinational companies headquartered in the United States Software companies of the United States Software companies established in 1988 Companies listed on the Nasdaq 1995 initial public offerings
40712897
https://en.wikipedia.org/wiki/Dark%20web
Dark web
The dark web is the World Wide Web content that exists on darknets: overlay networks that use the Internet but require specific software, configurations, or authorization to access. Through the dark web, private computer networks can communicate and conduct business anonymously without divulging identifying information, such as a user's location. The dark web forms a small part of the deep web, the part of the Web not indexed by web search engines, although the term deep web is sometimes mistakenly used to refer specifically to the dark web. The darknets which constitute the dark web include small, friend-to-friend peer-to-peer networks, as well as large, popular networks such as Tor, Freenet, I2P, and Riffle, operated by public organizations and individuals. Users of the dark web refer to the regular web as Clearnet due to its unencrypted nature. The Tor dark web, or onionland, uses the traffic-anonymization technique of onion routing under the network's top-level domain suffix .onion. Terminology Definition The dark web has often been confused with the deep web, the parts of the web not indexed (searchable) by search engines. The term dark web first emerged in 2009; however, it is unknown when the actual dark web first emerged. Many internet users only use the surface web, the data that can be reached with a typical web browser and search engine. The dark web forms a small part of the deep web but requires custom software in order to access its content. This confusion dates back to at least 2009. Since then, especially in reporting on Silk Road, the two terms have often been conflated, despite recommendations that they should be distinguished. Dark web sites, also known as darknet websites, are accessible only through networks such as Tor ("The Onion Routing" project) that are created specifically for the dark web. The Tor browser and Tor-accessible sites are widely used among darknet users and can be identified by the domain ".onion". Tor browsers create encrypted entry points and pathways for the user, allowing their dark web searches and actions to be anonymous. The identities and locations of darknet users stay anonymous and cannot be tracked, due to the layered encryption system. Darknet encryption technology routes users' data through a large number of intermediate servers, which protects the users' identity and guarantees anonymity. The transmitted information can be decrypted only by a subsequent node in the scheme, which leads to the exit node. The complicated system makes it almost impossible to reproduce the node path and decrypt the information layer by layer. Due to the high level of encryption, websites are not able to track the geolocation and IP address of their users, and users are not able to get this information about the host. Thus, communication between darknet users is highly encrypted, allowing users to talk, blog, and share files confidentially. Content A December 2014 study by Gareth Owen from the University of Portsmouth found that the most commonly hosted type of content on Tor was child pornography, followed by black markets, while the individual sites with the highest traffic were dedicated to botnet operations. Many whistleblowing sites maintain a presence, as do political discussion forums. Sites associated with Bitcoin, fraud-related services, and mail-order services are some of the most prolific. As of December 2020, the number of active Tor sites in .onion was estimated at 76,300 (many of them duplicates); of these, about 18,000 had original content.
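The layered-encryption scheme described under Definition above can be illustrated with a toy example. The sketch below uses the third-party Python cryptography package and is purely conceptual; Tor's actual protocol uses circuit negotiation and per-hop key agreement, not Fernet tokens:

```python
# A toy illustration of layered ("onion") encryption as described in the
# Definition section: each relay strips exactly one layer, so no single
# node sees both the sender and the plaintext. Conceptual only; not Tor.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit

def wrap(message: bytes, keys) -> bytes:
    # Encrypt in reverse circuit order so the entry node peels first.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

onion = wrap(b"request for a hidden service", relay_keys)
for key in relay_keys:                  # entry -> middle -> exit
    onion = Fernet(key).decrypt(onion)  # each hop removes one layer
print(onion)                            # plaintext emerges only at the exit
```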
In July 2017, Roger Dingledine, one of the three founders of the Tor Project, said that Facebook is the biggest hidden service. The dark web comprises only 3% of the traffic in the Tor network. A February 2016 study from researchers at King's College London gives a breakdown of content by an alternative category set, highlighting the illicit use of .onion services. Ransomware The dark web is also used in certain extortion-related processes. Indeed, it is common to observe data from ransomware attacks on several dark web sites (data sales sites, public data repository sites). Botnets Botnets are often structured with their command-and-control servers based on a censorship-resistant hidden service, creating a large amount of bot-related traffic. Darknet markets Commercial darknet markets mediate transactions for illegal goods and typically use Bitcoin as payment. These markets have attracted significant media coverage, starting with the popularity of Silk Road and Diabolus Market and their subsequent seizure by legal authorities. Silk Road was one of the first dark web marketplaces, emerging in 2011, and allowed the trading of weapons and identity-fraud resources. These markets have no protection for their users and can be closed down at any time by authorities. Despite the closures of these marketplaces, others pop up in their place. As of 2020, there have been at least 38 active dark web marketplaces. These marketplaces are similar to eBay or Craigslist, in that users can interact with sellers and leave reviews about marketplace products. Examination of price differences in dark web markets versus prices in real life or over the World Wide Web has been attempted, as have studies of the quality of goods received over the dark web. One such study was performed on Evolution, one of the most popular crypto-markets, active from January 2013 to March 2015. Although it found that the digital information, such as concealment methods and shipping country, "seems accurate", the study uncovered issues with the quality of illegal drugs sold on Evolution, stating that "the illicit drugs purity is found to be different from the information indicated on their respective listings." Less is known about consumer motivations for accessing these marketplaces and the factors associated with their use. Bitcoin services Bitcoin is one of the main cryptocurrencies used in dark web marketplaces due to the flexibility and relative anonymity of the currency. With Bitcoin, people can hide their intentions as well as their identity. A common approach was to use a digital currency exchanger service which converted Bitcoin into an online game currency (such as gold coins in World of Warcraft) that would later be converted back into fiat currency. Bitcoin services such as tumblers are often available on Tor, and some, such as Grams, offer darknet market integration. A research study undertaken by Jean-Loup Richet, a research fellow at ESSEC, and carried out with the United Nations Office on Drugs and Crime, highlighted new trends in the use of Bitcoin tumblers for money-laundering purposes. Due to its relevance in the digital world, Bitcoin has also become a popular vehicle for scams targeting companies. Cybercriminal groups such as DDOS"4" have carried out over 140 cyberattacks on companies since the emergence of Bitcoin in 2014. These attacks have led to the formation of other cybercriminal groups, as well as cyber extortion.
Hacking groups and services Many hackers sell their services individually or as part of groups. Such groups include xDedic, hackforum, Trojanforge, Mazafaka, dark0de and the TheRealDeal darknet market. Some have been known to track and extort apparent pedophiles. Cybercrimes and hacking services for financial institutions and banks have also been offered over the dark web. Attempts to monitor this activity have been made through various government and private organizations, and an examination of the tools used can be found in the Procedia Computer Science journal. Internet-scale DNS distributed reflection denial-of-service (DRDoS) attacks have also been carried out by leveraging the dark web. Many scam .onion sites also exist, offering tools for download that are infected with trojan horses or backdoors. Financing and fraud Scott Dueweke, the president and founder of Zebryx Consulting, states that Russian electronic currencies such as WebMoney and Perfect Money are behind the majority of the illegal actions. In April 2015, Flashpoint received a $5 million investment to help its clients gather intelligence from the deep and dark web. There are numerous carding forums, PayPal and Bitcoin trading websites, as well as fraud and counterfeiting services. Many such sites are scams themselves. Phishing via cloned websites and other scam sites is common, with darknet markets often advertised with fraudulent URLs. Illegal pornography The type of content that has the most popularity on the dark web is illegal pornography, more specifically child pornography. About 80% of its web traffic is related to accessing child pornography, despite it being difficult to find even on the dark web. A website called Lolita City, which has since been taken down, contained over 100 GB of child pornographic media and had about 15,000 members. There is regular law enforcement action against sites distributing child pornography, often via compromising the site and tracking users' IP addresses. In 2015, the FBI investigated and took down a website called Playpen. At the time, Playpen was the largest child pornography website on the dark web, with over 200,000 members. Sites use complex systems of guides, forums and community regulation. Other content includes sexualised torture and killing of animals and revenge porn. In May 2021, German police said that they had dismantled one of the world's biggest child pornography networks on the dark web, known as Boystown; the website had over 400,000 registered users. Four people were detained in raids, including a man from Paraguay, on suspicion of running the network. Europol said several pedophile chat sites were also taken down in the German-led intelligence operation. Terrorism Terrorist organizations took to the internet as early as the 1990s; however, the birth of the dark web attracted these organizations due to the anonymity, lack of regulation, social interaction, and easy accessibility. These groups have been taking advantage of the chat platforms within the dark web to inspire terrorist attacks. Groups have even posted "how-to" guides, teaching people how to become terrorists and how to hide their identities. The dark web became a forum for terrorist propaganda, guiding information, and, most importantly, funding. With the introduction of Bitcoin, anonymous transactions became possible, allowing for anonymous donations and funding. By accepting Bitcoin, terrorists were able to raise money to purchase weaponry.
In 2018, an individual named Ahmed Sarsur was charged with attempting to purchase explosives and hire snipers to aid Syrian terrorists, as well as attempting to provide them financial support, all through the dark web. There are at least some real and fraudulent websites claiming to be used by ISIL (ISIS), including a fake one seized in Operation Onymous. As technology has advanced, cyberterrorists have been able to flourish by attacking its weaknesses. In the wake of the November 2015 Paris attacks, one such actual site was hacked by an Anonymous-affiliated hacker group, GhostSec, and replaced with an advert for Prozac. The Rawti Shax Islamist group was found to be operating on the dark web at one time. Social media Within the dark web, there exist emerging social media platforms similar to those on the World Wide Web; this is known as the Dark Web Social Network (DWSN). The DWSN works like a regular social networking site: members can have customizable pages, have friends, like posts, and blog in forums. Facebook and other traditional social media platforms have begun to make dark-web versions of their websites to address problems associated with the traditional platforms and to continue their service in all areas of the World Wide Web. Unlike Facebook, the privacy policy of the DWSN requires that members reveal absolutely no personal information and remain anonymous. Hoaxes and unverified content There are reports of crowdfunded assassinations and hitmen for hire; however, these are believed to be exclusively scams. The creator of Silk Road, Ross Ulbricht, was arrested by Homeland Security Investigations (HSI) for his site and for allegedly hiring a hitman to kill six people, although the charges were later dropped. There is an urban legend that one can find live murder on the dark web. The term "Red Room" has been coined based on the Japanese animation and urban legend of the same name; however, the evidence points toward all reported instances being hoaxes. On June 25, 2015, the indie game Sad Satan was reviewed by the YouTube channel Obscure Horror Corner, which claimed to have found it via the dark web. Various inconsistencies in the channel's reporting cast doubt on the reported version of events. There are several websites which analyze and monitor the deep web and dark web for threat intelligence. Policing the dark web There have been arguments that the dark web promotes civil liberties, like "free speech, privacy, anonymity". Some prosecutors and government agencies are concerned that it is a haven for criminal activity. The deep and dark web are applications of integral internet features that provide privacy and anonymity. Policing involves targeting specific activities of the private web deemed illegal or subject to internet censorship. When investigating online suspects, police typically use the IP (Internet Protocol) address of the individual; however, because Tor browsers provide anonymity, this tactic becomes impossible. As a result, law enforcement has employed many other tactics in order to identify and arrest those engaging in illegal activity on the dark web. OSINT (open-source intelligence) refers to data collection tools that legally collect information from public sources. OSINT tools can be dark-web specific, helping officers find pieces of information that lead to greater knowledge about interactions going on in the dark web.
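Monitoring tools of this kind typically reach onion services programmatically through a local Tor client. A minimal sketch (assuming Tor is listening on its default SOCKS5 port 9050 and that the requests package is installed with SOCKS support; the .onion address is a placeholder, not a real service):

```python
# A minimal sketch of programmatically fetching an onion service through
# a local Tor client. Assumes Tor is listening on its default SOCKS5 port
# (9050) and that requests is installed with SOCKS support
# (pip install requests[socks]); the .onion address below is a placeholder.
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"   # socks5h: let Tor resolve .onion names
proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

resp = requests.get("http://exampleonionplaceholder.onion/",
                    proxies=proxies, timeout=60)
print(resp.status_code, len(resp.content))
```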
In 2015, it was announced that Interpol now offers a dedicated dark web training program featuring technical information on Tor, cybersecurity, and simulated darknet market takedowns. In October 2013, the UK's National Crime Agency and GCHQ announced the formation of a "Joint Operations Cell" to focus on cybercrime. In November 2015, this team was tasked with tackling child exploitation on the dark web as well as other cybercrime. In March 2017, the Congressional Research Service released an extensive report on the dark web, noting the changing dynamic of how information is accessed and presented on it; characterized by the unknown, it is of increasing interest to researchers, law enforcement, and policymakers. In August 2017, according to reportage, cybersecurity firms which specialize in monitoring and researching the dark web on behalf of banks and retailers routinely shared their findings with the FBI and with other law enforcement agencies "when possible and necessary" regarding illegal content. The Russian-speaking underground offering a crime-as-a-service model is regarded as particularly robust. Journalism Many journalists, alternative news organizations, educators, and researchers are influential in their writing and speaking about the darknet, making its use clear to the general public. Media coverage typically reports on the dark web in two ways: detailing the power and freedom of speech that the dark web allows people to express, or, more commonly, reaffirming the illegality and fear of its contents, such as computer hackers. Many headlines tie the dark web to child pornography, with headlines such as "N.J. man charged with surfing 'Dark Web' to collect nearly 3K images of child porn", along with other illegal activities where news outlets describe it as "a hub for black markets that sell or distribute drugs". Specialist Clearweb news sites such as DeepDotWeb and All Things Vice provide news coverage and practical information about dark web sites and services; however, DeepDotWeb was shut down by authorities in 2019. The Hidden Wiki and its mirrors and forks hold some of the largest directories of content at any given time. Traditional media and news channels such as ABC News have also featured articles examining the darknet. See also Deepnet Darknet market List of Tor onion services OneSwarm References External links Excuse Me, I Think Your Dark Web is Showing – A presentation at the March 2017 BSides Vancouver Security Conference on security practices on Tor's hidden services Attacks Landscape in the Dark Side of the Web
2576553
https://en.wikipedia.org/wiki/United%20Nations%20University%20International%20Institute%20for%20Software%20Technology
United Nations University International Institute for Software Technology
The United Nations University International Institute for Software Technology (UNU-IIST; Portuguese: Instituto Internacional para Tecnologia de Programação da Universidade das Nações Unidas) was a United Nations University research and training centre based in Macau, China. History In 1989, the Council of the United Nations University decided to establish in Macau the United Nations University International Institute for Software Technology (UNU-IIST) as a research and training centre of the University. UNU-IIST opened its doors in 1992 with the blessing of the governments of Portugal, China and Macau, which were also the institute's initial donors. As part of the United Nations, the institute was to address the pressing global problems of human survival, development and welfare through international co-operation, research, and advanced training in software technology. Recent history during the 2000-2010 decade UNU-IIST was positioned as both a university and an organ of the United Nations. For nearly a generation, it was committed to providing vital research on critical software systems while mentoring exceptional academics from around the developing world. As the world of information and communication technology (ICT) changed immensely over the decades, the institute defined a new mission along with a four-year strategic plan in 2010. The plan called for UNU-IIST to embark on a more dramatic and focused response to the new computing environment and its potential to serve the cause of sustainable development (SD). Eventually, UNU decided to evolve the former IIST into a new Institute on Computing and Society (ICS). Administration The current Director of UNU-CS is Dr. Jingbo Huang. Former directors Prof. Michael L. Best (2015-2018) Prof. Peter Haddawy (2010-2015) Prof. Mike Reed (2005-2010) Prof. Zhou Chaochen (1997-2002) Prof. Dines Bjørner (1992-1997) Center for Electronic Governance The Center for Electronic Governance at UNU-IIST is an international center of excellence for research and practice in electronic governance. Established in 2007, the center was built upon the contribution of UNU-IIST to the eMacao Project (2004–2006) and the eMacao Program (2007–present), a collaborative initiative to build and utilize a foundation for electronic government in the Macao SAR. Since 2010, it has been an official programme of UNU-IIST. The mission of the Center for Electronic Governance at UNU-IIST is to support governments in developing countries in the strategic use of technology to transform the working of public organizations and their relationships with citizens, businesses, civil society, and with one another. Activities at the center include applied and policy research, capacity building, and various forms of development: strategy development, software development, institutional development, and the development of communities of practice. Teaching The center regularly organizes and conducts schools, seminars, lectures and presentations for government leaders, managers, researchers, educators, etc. on various aspects of electronic governance. Various courses and presentation materials from these events are available. Conferences Since its establishment in 2007, the center has led the organization of the series of International Conferences on Electronic Governance (ICEGOV), with the first four editions held in Macao, Cairo, Bogota, and Beijing. See also International Conference on Theory and Practice of Electronic Governance Prof.
He Jifeng, former Senior Research Fellow References External links UNU-IIST Website United Nations University (UNU) United Nations University Vice-Rectorate in Europe (UNU-ViE) United Nations University Office in Paris (UNU-OP) United Nations University Office in New York (UNU-ONY) Center for Electronic Governance at UNU-IIST International Conference on Theory and Practice of Electronic Governance Center for Electronic Governance: Publications Projects: Developing Electronic Governance in Afghanistan - Assessment, Strategy, Implementation - EGOV.AF Developing Electronic Governance in Cameroon - Assessment, Strategy, Implementation - EGOV.CM Evaluation of Electronic Government Training by EU-China Information Society Project Government Enterprise Architecture Framework Government Information Sharing Knowledge Management for Electronic Governance Capacity Building for Electronic Governance IT Leadership and Coordination Research: Development Models and Frameworks Assessment, Evaluation and Measurement Strategies, Architectures and Alignment Software Infrastructure and Services Knowledge Management and Leadership International Comparative Studies Educational institutions established in 1992 Computer science departments Formal methods organizations International research institutes Research institutes in China Computer science institutes United Nations organizations based in Asia Universities in Macau International Institute for Software Technology 1992 establishments in Macau
26890942
https://en.wikipedia.org/wiki/Highway%20systems%20by%20country
Highway systems by country
This article describes the highway systems available in selected countries. Albania In Albania, major cities are linked with either new single/dual carriageways or well-maintained state roads marked "SH" (Rrugë Shtetërore). A dual carriageway connects the port city of Durrës with Tirana and Vlorë, and partially with Kukës. There are three official motorway segments in Albania, marked with an "A" (Autostradë): Thumanë–Milot–Rrëshen–Kalimash (A1), Levan–Vlorë (A2), and part of Tirana–Elbasan (A3). Most rural segments remain in bad condition, as their reconstruction, carried out by the Albanian Development Fund, only began in the late 2000s. Algeria About 1,390 km of highways in Algeria are in service and another 1,500 km are under construction. Australia In Australia, a highway is a distinct type of road from freeways, expressways, and motorways. The word highway is generally used to mean major roads connecting large cities, towns and different parts of metropolitan areas. Metropolitan highways often have traffic lights at intersections, and rural highways usually have only one lane in each direction. The words freeway, expressway or motorway are generally reserved for the most arterial routes, usually with grade-separated intersections and usually significantly straightened and widened to a minimum of four lanes. The term motorway is used in some Australian cities to refer to freeways that have been allocated a metropolitan route number. Roads may be part highway and part freeway until they are fully upgraded. The Cahill Expressway, which opened in 1954 as the first in the region, is the only "named" expressway in New South Wales. Austria In contrast to Germany, according to a 2002 amendment of the Austrian federal road act, Bundesstraßen is the official term referring only to autobahns (Bundesstraßen A) and limited-access roads (Schnellstraßen, Bundesstraßen S). The administration of all other former federal highways (Bundesstraßen B) has passed to the federal states (Bundesländer). Therefore, although officially classified as Landesstraßen, they are still colloquially called Bundesstraßen and have retained their 'B' designation (except in Vorarlberg), followed by the number and a name. They are marked by a blue number sign. Belgium Belgium has the second-highest-density highway network in Europe after the Netherlands, at 54.7 km per 1,000 km². Most Belgian highways have three lanes, with a few exceptions like the ring roads around Brussels and Antwerp, which have five or six lanes in some stretches. Belgium is situated at a crossroads of several countries, and its highways are used by many nationalities. Belgian highways are indicated by the letter "A" and a European number, with E numbers being used most often. Roads that are (part of) a ring road around a city or a town are usually indicated by an R number. Many of the highways in Belgium are illuminated at night, since there is a surplus of nuclear-powered electricity during off-peak hours. Bosnia and Herzegovina In Bosnia and Herzegovina, the Pan-European Corridor Vc motorway, Budapest–Osijek–Sarajevo–Ploče, is one of the most significant projects and of the highest priority; in Bosnia and Herzegovina it coincides with the A1 motorway. Construction work on the road has already begun, but an intensified start of construction will be a key catalyst for economic and social activity, and will enable Bosnia and Herzegovina to be connected to the main European traffic network as well as to the wider European economic and social structure.
Construction of the motorway, whose total length is 340 km, will provide rational connections to neighboring countries and regions; stabilizing and developmental effects; improved transport conditions and quality of life; enhanced economic competitiveness; and the launch of new projects with increased national and international private investment. Botswana Brazil In Brazil, highways (or expressways/freeways) are called rodovias, and Brazilian highways are divided into two types: regional highways (generally of less importance and lying entirely within one state) and national highways (of major importance to the country). In Brazil, rodovia is the name given exclusively to roads connecting two or more cities with a sizable distance separating the extremes of the highway. Urban highways for commuting are uncommon in Brazil, and where they are present, they receive different names depending on the region (Avenida, Marginal, Linha, Via, Eixo, etc.); names other than rodovia are otherwise rarely used. Regional highways are named YY-XXX, where YY is the abbreviation of the state the highway runs in and XXX is a number (e.g. SP-280, where SP means that the highway runs entirely in the state of São Paulo). National highways are named BR-XXX. National highways connect multiple states, are of major importance to the national economy, and/or connect Brazil to another country. The meanings of the numbers (captured in the sketch below) are: 001–100 – the highway runs radially from Brasília; an exception to the cases below. 101–200 – the highway runs in a south–north direction. 201–300 – the highway runs in a west–east direction. 301–400 – the highway runs diagonally (northwest–southeast, for example). 400–499 – another exception: these are less important highways whose function is to connect a city to a nearby arterial highway. Often, Brazilian highways receive names (of famous people, etc.) on top of their YY/BR-XXX designation (for example, SP-280 is also known as Rodovia Castelo Branco). Bulgaria The strategic location of the country on the Balkan Peninsula means that 4 out of the 10 land Pan-European corridors run through it, as well as 10 European routes: 6 A-class and 4 B-class routes. Highways in Bulgaria are dual carriageways, grade-separated with controlled access, designed for high speeds. In 2012, legislation amendments defined two types of highways: motorways and expressways. The main differences are that motorways have emergency lanes and a maximum allowed speed limit of 140 km/h, while expressways do not, and their speed limit is 120 km/h. As of December 2018, of motorways are in service, with another being under various stages of construction. More than of motorways are planned. Several expressways are also planned. Canada In Canada, there is no national standard for nomenclature, although in non-technical contexts highway appears to be most popular in most areas. The general speed limits on most Canadian highways range between 80 km/h (50 mph) and 110 km/h (70 mph) on two-lane rural and urban highways, and between 80 km/h (50 mph) and 120 km/h (75 mph) on multi-lane, divided highways. The Prairie Provinces are known for having higher speed limits than Central Canada and the Maritimes because of the flat geography and a more car-dominant way of life; however, British Columbia remains the only province in Canada to have a speed limit of 120 km/h, on the Coquihalla Freeway.
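The BR-XXX numbering rules in the Brazil section above can be captured in a small classifier. A minimal sketch (the function name is illustrative, and real designations include exceptions):

```python
# A minimal sketch of the BR-XXX numbering rules listed above. The name
# classify_br_highway is illustrative; real designations include
# exceptions, and the article's last two ranges overlap at 400 (treated
# here as diagonal).
def classify_br_highway(number: int) -> str:
    if 1 <= number <= 100:
        return "radial (runs radially from Brasília)"
    if 101 <= number <= 200:
        return "longitudinal (south-north)"
    if 201 <= number <= 300:
        return "transversal (west-east)"
    if 301 <= number <= 400:
        return "diagonal (e.g. northwest-southeast)"
    if 401 <= number <= 499:
        return "connector (links a city to a nearby arterial highway)"
    raise ValueError(f"BR-{number:03d} is outside the documented ranges")

# BR-101, for instance, is classified as a south-north longitudinal route.
print(classify_br_highway(101))
```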
Canada is the second-largest country in the world in terms of land area, though it only has of paved roads. This is far less highway and road distance than in the United States, which is smaller but has more than 6,000,000 kilometres of paved roads and highways. However, Canada still has many more roads and highways than Russia, the largest country in the world by land area, with an estimated 336,000 kilometres (208,000 miles) of paved roads. The most extensive freeway network in Canada is in well-populated southeastern Canada, linking southern Ontario, southern Quebec, Nova Scotia, New Brunswick, and the United States. This makes the freeway network there very well travelled. These routes must be well maintained to overcome the frequently harsh winter weather; wide enough to accommodate the high traffic volumes that they carry in large metropolitan areas (such as around Toronto, Montreal, Ottawa, and Detroit), in order to reduce the economic problems and frustrations that result from heavy traffic congestion; and safe enough to reduce the number of vehicle accidents. Ontario has some of the busiest freeways in North America. All of its public roads are legally defined as highways, though provincially managed roads are legally known as Provincial Highways. In day-to-day usage, the term highway is used for provincial routes or freeways. It is also common for surface routes to be referred to by number (e.g. "Take Highway 10 from Mississauga to Owen Sound"), especially by older generations. The words freeway or expressway are sometimes used to refer to controlled-access, high-speed, grade-separated highways such as the 400-series highways, the Gardiner Expressway, the Don Valley Parkway, the Conestoga Parkway, or the E.C. Row Expressway. The only highway officially labelled as a freeway is the Macdonald–Cartier Freeway, usually known as Highway 401, or simply "the 401", which is North America's busiest freeway, as well as one of the widest in the world, at 18 through lanes in the section passing through Toronto. The Queen Elizabeth Way was the first intercity divided highway in North America. Nearly all highways in Ontario use parclo interchanges, which were developed by the province. Parclos are used to avoid weaving and to maximize efficiency and safety. In Quebec, major highways are called autoroutes in French, and expressways or autoroutes in English. Nova Scotia numbers its highways by the trunk routes they parallel. For example, Highway 107 parallels Trunk 7. To a lesser extent, this also applies in Ontario (e.g. Highway 410 and Highway 420 parallel Highway 10 and Highway 20). Nova Scotia also numbers its highways according to usage: main arterial highways are in the 100s, secondary or old arterial highways are numbered in the double digits from 1 to 28, and collector roads are numbered in the triple digits starting at 200. The Trans-Canada Highway (or Trans-Canada) is a highway that crosses all of Canada from east to west and enters all ten provinces.
The Trans-Canada ranges from a two-lane highway as it runs through the mountains of British Columbia, with occasional divided-highway status as the province commits to twinning the road; a full divided highway, with some sections qualifying as freeways, throughout Alberta and Saskatchewan; a mixture of both throughout Manitoba; a two-lane at-grade highway again as it passes through the sparsely populated areas of northern Ontario; and a multi-lane freeway as it travels through southern Ontario, southern Quebec, New Brunswick, and Nova Scotia. There are three or more ferry routes along the Trans-Canada, which allow it to connect to Newfoundland, Prince Edward Island, Haida Gwaii, and Vancouver Island. The Confederation Bridge provides an alternative route from New Brunswick to Prince Edward Island. Since the Trans-Canada Highway is not yet a divided, multi-lane freeway for its entire length, the section that crosses the western provinces and northern Ontario is considered to be more of an equivalent to the U.S. Route highway network in the neighboring United States. Southern Ontario's 400-series freeways, Quebec's autoroutes, New Brunswick's portion of the Trans-Canada, Nova Scotia's 100-series highways, Alberta's ring road system, and Saskatchewan's ring road system are provincial equivalents of the American Interstate Highway System. The Canadian freeways interconnect with each other across provincial lines, and also with the American Interstate system. For example, freeways in Quebec connect Montreal with the American border, from where Interstate 87 continues to New York City; likewise, Toronto is connected to the border by Ontario freeways, and thence by Interstate 190 to Buffalo, New York. Chile Chile has extensive highway coverage connecting the whole country, with the exception of the Magallanes region. China, People's Republic of "Highways" in China more often than not refer to China National Highways. Fully controlled-access, multi-lane, divided routes are instead called expressways. There were 5.98 million km of highways and 131,000 km of expressways in China; both total lengths are the longest in the world. In mainland China, private companies reimbursed through tolls are the primary means of creating and financing the National Trunk Highway System (NTHS). Expressways are lumped together with first-grade G-prefixed guódào (国道, or "national highway"), or with A-prefixed first-grade expressways in major municipal cities. All roads in the NTHS and most A-prefixed roads are expressways. M-prefix: National (Trunk) Expressways (planned) G-prefix: National highways (typically expressways) A-prefix: Municipal highways (typically expressways) S-prefix: Provincial highways X-prefix: County highways Y-prefix: Rural roads Z-prefix: Special-use roads (e.g., airport expressways) Some highways are numbered with a leading zero (e.g. G030). During the 1990s, the term freeway was used on a few expressways (such as the Jingshi Freeway); it has since been replaced with expressway on all signs in China. The Chinese name for expressways is 高速公路; in pinyin, it is gāosù gōnglù, which literally means "high-speed public road". Signs on the national highways (G-prefix) are green, while those on lower-grade highways and urban expressways (A-prefix) are blue. Hong Kong In Hong Kong, high-speed roads are referred to as expressways, but some are named highways or roads ('Yuen Long Highway', 'Tolo Highway', 'Tsuen Wan Road', 'Tuen Mun Road', etc.).
Some others are named corridors and bypasses.
Colombia
In Colombia, highways are managed by the Colombian Ministry of Transport through the National Institute of Roads. Colombia's road infrastructure is still very underdeveloped, with most highways consisting of a single two-lane road carrying both outbound and inbound traffic. Some exceptions are the Autopista Norte, linking Bogota with the towns of Tunja and Sogamoso, and the highways of the Valle del Cauca, an infrastructure improvement project started about a decade ago that has not yet been entirely finished. Several dual carriageways also link cities such as Medellin, Pereira, Manizales and Armenia. Direct public funding of highways is now increasing, focused mostly on connecting Colombia's agricultural and industrial heartland with its Caribbean and Pacific ports by twinning existing roads and constructing 5,892 km of new roads. The most important projects under negotiation or construction are La Ruta del Sol (the Sun Road), a four-lane highway between Bogota and the Caribbean coast, and the highway between Bogota and Buenaventura (Colombia's largest and busiest port), which includes a 9 km tunnel.
Croatia
Croatia has 11 highways and 13 expressways. The earliest highway in Croatia was built in 1971. The word highway is a common English translation of the Croatian term autocesta, which describes a toll highway similar to a freeway or an Autobahn.
Czechia
Czechia has 17 motorways. Construction of the earliest Czech highway (D1), between Prague and Brno, was initiated in 1939, but it was twice interrupted and reached Brno only in 1980. The word highway is a common English translation of the Czech term dálnice, which describes a toll highway similar to a freeway or an Autobahn.
Denmark
With the completion of the extremely long highway bridge-tunnels of the Great Belt Fixed Link in 1998 and the Øresund Bridge in 2000, continental Europe was finally connected by road and rail with the capital city, Copenhagen, and with Sweden, including the Swedish highway and railroad systems. The bridge-tunnels are interconnected with the major Danish highways and complete a continuous international road connection from northern Sweden to Gibraltar at the southern edge of Spain and to Messina, Italy, at the southern tip of the Italian "boot". Construction of the 18-kilometre Fehmarn Belt Fixed Link commenced in 2017; it is planned to link Zealand (with Copenhagen) to northern Germany by 2028.
Finland
The national highways in Finland are numbered 1-29 and total about 9,000 km in length. This numbering system originates from 1938. There are 881 kilometres of motorways around the largest cities, especially in the south near the capital, Helsinki. Highways numbered 1-6 are the main connecting roads in Finland.
France
France has a national highway system dating back to Louis XV (see Corps of Ponts et Chaussées). The chaussées constructed at this time, radiating out from Paris, form the basis for the routes nationales (RN), whose red numbers differ from the yellow numbering used for the secondary routes départementales. The RNs numbered from 1 to 20 radiate from Paris to major ports or border crossings. More recently, after the Second World War, France constructed autoroutes (e.g. the A6-A7, known as the "Autoroute du Soleil"), superhighways (usually tolled) with a speed limit of 130 km/h (110 km/h in rainy conditions or urban areas). These autoroutes caused some parts of the routes nationales to be downgraded to secondary routes départementales.
Germany
Aside from highways bearing the Autobahn designation, Germany has many two- and four-lane roads. Federal highways not classed as Autobahnen are called Bundesstraßen (Bundesstrassen) and, while usually two-lane roads, they may also be four-lane, limited-access expressways of local or regional importance. Unlike the Autobahnen, though, Bundesstraßen (marked by black numbers on a yellow background) mostly have speed limits (usually 100 km/h, occasionally higher on limited-access segments, and lower in urban areas or near intersections).
Greece
Hungary
Hungary has 7 major motorways ("autópálya"):
M0 is a quasi-circular highway for traffic bypassing Budapest. It is divided into sectors: Southern (links motorways M1, M7, M6 and M5), South-eastern (links Motorway M5 and Main Road nr. 4), Eastern (links Main Road nr. 4 and Motorway M3), Northern (links Main Road nr. 2 with the Megyeri Bridge) and Western (to be finished in 2015; will link main road 11 with Motorway M1). The total length will be around 100 km.
M1: links Budapest and the north-western border with Austria (Hegyeshalom), then continues toward Vienna. The total length is around 170 km.
M3: links Budapest with the north-eastern city of Miskolc (M30 branch) and the eastern cities of Nyíregyháza (M3) and Debrecen (M35 branch). It provides links toward Slovakia, Ukraine and Romania, and has a total length of around 250 km.
M5: links Budapest and the southern city of Szeged, then the Serbian border (Röszke). It provides a connection to Southern Europe by route E75 and also links to route 68 in Romania. The M5 motorway has a length of around 140 km.
M7: links Budapest and the southern shore of Lake Balaton, then continues toward Croatia and Slovenia. Its length is about 230 km.
M6: links Budapest and Dunaújváros, now extended to the southern city of Pécs. The original length was around 60 km.
There are also other smaller motorway sections that will be linked to the national motorway network in the future. Motorways usually have two traffic lanes and an emergency lane in each direction, divided by a green strip and a metal guardrail. The speed limit is 130 km/h. Expressways usually have no dividing strip in the middle, but sometimes have a metal guardrail. The number of lanes is one per direction, with sections of 1+2 lanes (for easier overtaking). The speed limit is 110 km/h. Motorways and expressways cannot be used by vehicles that are unable to reach 60 km/h. There is a toll on all motorways except the M0; trucks and buses have a separate toll system. Those who wish to travel on these roads have to buy a sticker. Controversially, there is no option to buy a one-day or one-time pass for passenger cars. Main roads usually have one lane per direction and no dividing rail; the speed limit is 110 km/h. County roads carry less traffic than main roads; the speed limit is 90 km/h.
India
In India, 'highway' refers to one of the many National Highways and State Highways, which run to a total length of over 300,000 km and consist mostly of two-lane paved roads, widening to more lanes mostly around cities. National Highways are designated as NH followed by the number.
As of 2009, the major cities in India – Ahmedabad, Pune, Jaipur, Bengaluru, Hyderabad, Nagpur, Visakhapatnam, Mumbai, Chennai, Kolkata, and Delhi – are connected by the Golden Quadrilateral and the North-South and East-West Corridors, which consist of four- to six-lane roads. Other major cities are connected to them by the National Highways. An expressway in India is an access-controlled road with grade-separated intersections; expressways make up a very small portion of India's highway network, about 1,454.4 km in length. Expressways are separate from the highway network, except for the Delhi-Gurgaon Expressway, which is part of NH 8. The Agra-Lucknow Expressway, more than 300 km long and built at a cost of ₹150 billion (US$2.2 billion), became India's longest expressway upon its inauguration on 21 November 2016, surpassing the 165 km Yamuna Expressway.
Indonesia
The Indonesian national route system exists solely on Java. Tolled expressways are built parallel to the national routes; for example, the Jakarta-Merak Toll Road parallels National Highway 1 from Merak Harbour to Jakarta. Urban expressways have also been built, for example Jakarta's Inner and Outer Ring Roads. The main cities on Java are also well connected by the toll road network, including the Cipularang Toll Road in West Java connecting Jakarta and Bandung, and the Trans-Java toll road connecting Jakarta and Surabaya. The network also connects to Cirebon (Cikopo–Palimanan Toll Road), Semarang (Batang-Semarang Toll Road), Surakarta (Semarang-Solo Toll Road) and several towns in both Central and East Java, especially along the north coast of Central Java. There is a plan to connect Lampung to Aceh on the island of Sumatra via the Trans-Sumatra Toll Road network, and Balikpapan and Samarinda in East Kalimantan were connected by toll road in 2019.
Ireland
The Republic of Ireland has the 6th densest motorway network in Europe; other main routes are designated 'N' (national) or 'R' (regional) roads. The maximum speed is 120 km/h on motorways. The main inter-urban motorways, connecting Dublin to the cities of Cork, Limerick, Waterford and Galway, along with other projects, have increased the total motorway network in the state to approximately 1,017 kilometres (632 mi).
Iran
In Iran, the term highway, commonly known as autobahn (in Persian: اتوبان/بزرگراه), is applied to roads constructed to particular standards. A conventional speed limit of 120 km/h is in place for most highways. There are two types of highway in Iran:
Inner-city highways: found in larger cities such as Tehran, their main purpose is to prevent congestion and carry traffic through the city.
Intercity highways: these connect different parts of the country together.
Iraq
Israel
Italy
In Italy, the term highway can be applied to the superstrada (translatable as expressway, and toll-free) and the autostrada (the Italian term for motorway; most of the system is tolled). Italy was the first country in the world to build such roads, the first one being the "Autostrada dei Laghi" (Autostrada of the Lakes), from Milan to Varese, begun in 1921 and finished in 1924. This system of early motorways was extended from the early 1930s to the early 1970s. Today the Autostrade is a comprehensive system of about 6,500 km of modern motorways on which the maximum speed limit is 130 km/h.
Japan
The expressways, or kōsokudōro (high speed roads), of Japan form a huge network of freeway-standard toll roads. Once government-owned, they have been turned over to private companies.
Most expressways are four lanes with a central reservation, or median. The speed limits, applied with certain regulations and considerable flexibility, usually comprise a maximum of 100 km/h and a minimum of 50 km/h.
Lithuania
Malaysia
The highest level of major roads in Malaysia, expressways (lebuhraya), have full access control and grade-separated junctions, and are mostly tolled. The expressways link the major state capitals in Peninsular Malaysia and the major cities in the Klang Valley. Highways are a lower level of road, with limited access control, some at-grade junctions or roundabouts, and generally two lanes in each separated direction. These are generally untolled and funded by the federal government; hence the first one, linking Klang and Kuala Lumpur, is called the Federal Highway. The trunk roads linking major cities and towns in the country are called federal trunk roads and are generally two-lane single-carriageway roads, in places with a third climbing lane for slow lorries.
Mexico
Morocco
New Zealand
In New Zealand, both motorways and expressways have at least two lanes of traffic in each direction separated by a median, with no access to adjacent properties. The distinction depends on the type of traffic allowed to use the route. Non-vehicular traffic and farm equipment are prohibited from motorways, while pedestrians, cyclists, tractors, and farm animals are legally entitled to use expressways such as the Waikato Expressway south of the Bombay Hills and the Tauranga expressway system, although this is rare. New Zealand's main routes are designated state highways, as they are funded by the central government. State Highway 1 is the only route to run through both the North and South Islands; it runs (in order north-south) from Cape Reinga to Wellington in the North Island, and from Picton to Bluff in the South Island. State Highways 2-5 are main routes in the North Island, State Highways 6-9 in the South Island, and state highways numbered from 10 onwards are generally found in numerical order from north to south. State highways usually incorporate different standards of roads; for example, State Highway 1 from Auckland to Hamilton incorporates the Northern and Southern Motorways in the Auckland area, the Waikato Expressway, and a rural road before passing through the streets of Hamilton. The term freeway is rarely used in relation to New Zealand roads.
Netherlands
The autosnelweg system is in constant development. Most of it is owned and funded by the government, but in recent times public-private partnerships have come more and more into practice, as in part of the A59 between Oss and 's-Hertogenbosch. The Netherlands has the highest-density highway network in Europe, at 56.5 km per 1000 km², followed by Belgium. The autosnelwegen, the main corridors, are designated with an A number, while secondary connecting roads have an N number. Sections of the A network are also part of the international E-road network, connecting with Belgium and Germany and, by ferry, with England. The speed limit is 130 km/h unless posted otherwise; limits of 120 km/h, 100 km/h during the day, or 80 km/h apply in various locations to reduce exhaust emissions and to limit noise for surrounding residential areas.
North Macedonia
The total length of Macedonian motorways as of spring 2021 is 317 km, with another 70 km under construction (57 km from Kicevo to Ohrid and 13 km from the Skopje Ring Road to the border with Kosovo). Construction of an additional 60 km is planned to begin in 2022.
The three motorway sections are the A1 (part of E-75), which connects the northern border (Serbia) to the southern border (Greece); the A2 (part of E-65), which connects Skopje, Tetovo and Gostivar (and Kicevo and Ohrid by 2023); and the A3, which connects Skopje to the eastern town of Stip.
Norway
Norway has a national highway system, numbered 2-899. Some main highways are also European highways and have an E before the number. The highways are often relatively narrow and curvy. Near the larger cities, especially around Oslo and Trondheim, there are motorways. In recent decades Norway has also bored some extremely long highway tunnels through its mountain ranges; some of these, now among the world's longest, are so long that they contain hollowed-out caverns where motorists can stop and rest.
Pakistan
Pakistan has its own network of highways and motorways. Motorways extending from M1 to M10 will eventually connect the whole length of the country from Peshawar to Karachi. The M2, the first motorway, was built in 1997, with the contract awarded to the Korean firm Daewoo. It linked the federal capital, Islamabad, with Punjab's provincial capital, Lahore. The network was then extended to Faisalabad and on to Multan with the M4. The M1 motorway to Khyber Pakhtunkhwa's capital, Peshawar, was completed in October 2007. The M4, M5, M6, and M7 have been planned and are being built by local and foreign firms; they will connect Faisalabad, Multan, Dera Ghazi Khan, and Ratodero (Larkana) to Karachi. The N5 links Karachi to other cities. Entry onto all Pakistani highways is restricted to fast-moving vehicles only: slow-moving traffic and two-wheelers (such as motorcycles and bicycles) are not allowed, and construction and agricultural machinery is also restricted. The M9 and M10, which connect Karachi to Hyderabad, are also now functional. The 103 km Lahore-Sialkot Motorway (LSM) is under construction and is expected to be completed by 2010. Expressways are similar to motorways but with fewer access restrictions, and are owned, maintained and operated either federally or provincially. Pakistan's motorways are patrolled by the National Highways & Motorway Police (NH&MP), which is responsible for the enforcement of traffic and safety laws, security and recovery on the motorway network. The NH&MP uses SUVs, cars and heavy motorbikes for patrolling, and uses speed cameras to enforce speed limits.
Philippines
Portugal
Poland
Polish public roads are grouped into categories related to the administrative division: national roads, voivodeship roads, powiat roads, and gmina roads. As of 2008, portions of the voivodeship, powiat, and gmina networks remained unpaved. Polish motorways and expressways are part of the national roads network.
Romania
Romania currently has eight operational highways, totalling 943 km. They are now being extended, and additional motorways are planned to be built by 2030.
A0: Bucharest Ring Motorway: 75 of 100 km under construction (A2 - Jilava - DN6 - A1, DN1: Corbeanca - DN2: Afumați, DN3: Cernica - A2); estimated completion in 2024
A1: Bucharest-Nadlac highway: 443.7 of 580.2 km built (Bucharest-Pitești, Pitești Bypass, Sibiu Bypass, Sibiu-Coșevița, Margina-Nădlac); 44.5 km under construction (Pitești - Curtea de Argeș, Boița - Sibiu); estimated completion in 2026-2027
A2: Autostrada Soarelui (Highway of the Sun): all 202.7 km built (Bucharest-Cernavodă-Constanța)
A3: Autostrada Transilvania (Transylvania highway): 171 of 596 km built (București - Ploiești, Râșnov - Cristian, Târgu Mureș - Chețani, Câmpia Turzii - Nădășelu, Oradea North / Biharia - Borș); approx. 85 km under construction (Nădășelu - Poarta Sălajului, Nușfalău - Suplacu de Barcău, Biharia - Chiribiș); estimated completion unknown
A4: Autostrada Constanței: all 21.8 km built (Constanța Bypass)
A6: Lugoj-Calafat highway: 10.4 of approx. 260 km built (Balinț-Lugoj); estimated completion unknown
A7: Autostrada Moldova (Moldova highway): 16.2 of approx. 450 km built (Bacău Bypass)
A8: East-West highway: approx. 300 km planned; estimated completion unknown
A10: Sebeș-Turda highway: all 70 km built
A11: Arad - Oradea highway: 3 of approx. 135 km built (Arad Bypass); 19 km under construction (Oradea Bypass); estimated completion unknown
DX6: Galați–Brăila Expressway: 12.29 km under construction; estimated completion in 2023
DX12: Pitești–Craiova Expressway: 121.18 km under construction; estimated completion in 2023
There are no tolls for using the motorways in Romania, except for the Cernavodă Bridge over the Danube on the A2. Nevertheless, every Romanian car that uses a motorway or a national road in Romania must pay a toll, specifically a vignette. A few years ago the vignette was moved to an electronic format, eliminating the need for a physical sticker.
Russia
Russia has many highways, but only a small number of them are currently motorways; examples are the Moscow and Saint Petersburg Ring Roads. Highways and motorways are free in Russia; two motorways currently under construction, the Western High Speed Diameter and the Moscow-Saint Petersburg toll motorway, will be the first Russian toll motorways. Russians themselves often translate the Russian term for highway (автомобильные дороги, "automobile roads") as motorway in English, although highway is the more accurate English equivalent.
Saudi Arabia
Saudi Arabia has a total highway length of 73,000 km. Highways in Saudi Arabia vary from ten-lane roads to smaller four-lane roads. The city highways and other major highways, such as the roads in Riyadh, are well maintained, and the roads are constructed to resist the summer's extremely high heat and not reflect the strong sun. The outer-city highways, such as those linking coast to coast, are not of the same standard as the inner-city Riyadh highways, which are among the fastest in the country, but the government is now working on rebuilding those roads. Some of the important intercity highways include:
Dammam - Khafji Highway (457 km; 120 km/h)
Jeddah - Makkah Highway (75 km; 120 km/h)
Makkah - Madinah Al Munawarah Highway (421 km; 120 km/h)
Riyadh - Gomfida Highway (395 km; 140 km/h)
Riyadh - Qasim Highway (317 km; 140 km/h)
Riyadh - Taif Highway (950 km; 140 km/h)
Taif - Abha Highway (950 km; 110 km/h)
Serbia
The highways in Serbia are classified as IA state roads; the common name for a highway is auto-put, and they operate with controlled access and a toll payment system.
Serbia currently has 876 km of highways (out of a total of 40,845 km of public roads), with a further 1,154 km planned. Because of its geographical position, Serbia is very important for the transit of goods and services through Europe, and especially the Balkans. It is also one of the most important countries in the Balkans for the Pan-European corridors (E65, E70, E75, E80, E661, E662, E761, E763, E771, E851). The signs on Serbian highways are green, and the speed limit is 130 km/h. The history of Serbian highways begins in socialist Yugoslavia, when increased production led to growing transit on public roads. The first highway to be built was the Brotherhood and Unity Highway, which passed through Slovenia, Croatia, Serbia and Macedonia; it formed part of Pan-European Corridor X and was built around the 1970s.
Singapore
The expressways of Singapore are all dual carriageways with grade-separated access. They usually have three lanes in each direction, although there are two- or five-lane carriageways in some places. There are nine expressways; the newest, the Marina Coastal Expressway, was constructed partly underwater using modern tunnelling technology. Construction on the first expressway, the Pan Island Expressway, started in 1966. The other expressways were completed in stages, the most recent being the first phase of the Kallang-Paya Lebar Expressway in 2007. Today, there are 164 kilometres of expressways in Singapore.
Slovakia
The highways in Slovakia are divided into motorways (diaľnica) and expressways (rýchlostné cesty). The first modern highway in Slovakia was to have been a motorway, planned in the 1930s, connecting Prague with the northern parts of Slovakia; however, construction of the Slovak motorways did not begin until the 1970s. As of December 2018, a network of motorways and expressways is in service, with more sections in various stages of construction.
Slovenia
The highways in Slovenia are the central state roads in Slovenia and are divided into motorways (Slovene: avtocesta, AC) and expressways (hitra cesta, HC). Motorways are dual carriageways with a speed limit of 130 kilometres per hour (81 mph). They have white-on-green road signs, as in Italy, Croatia and other nearby countries. Expressways are secondary highways, also dual carriageways, but often without the hard shoulder. They have a speed limit of 110 kilometres per hour (68 mph) and white-on-blue road signs.
South Africa
Colloquially, the terms "freeway", "highway", and "motorway" are used synonymously; the term "expressway" is not common in South Africa. A freeway, highway or motorway refers to a divided dual carriageway with limited access and at least two lanes in each direction. A central island, usually either with drainage, foliage, or high-impact barriers, provides a visible separation between the carriageways in opposite directions. As in the United Kingdom, Ireland, Australia, and Japan, South Africans drive on the left-hand side of the road, and nearly all steering wheels are on the right-hand side of vehicles. Freeways are designated with one of three labels: N (in reference to national roads), R (short for "route", in reference to provincial roads), and M (in reference to metropolitan roads). This has more to do with the location of a road and its function than anything else.
In addition, "N" roads usually run the length of the country over long distances, "R" roads usually inter-connect cities and towns within a province, and "M" roads carry heavy traffic in metropolitan areas. Route markings also determine who paid for the road: "N" was paid for by national government, "R" by provincial government, and "M" by local government. In recent years, some "R" roads have been re-designated as "N" roads, so that control and funding comes from the South African National Roads Agency. South Korea Expressways in South Korea were originally numbered in order of construction. Since August 24, 2001, they have been numbered in a scheme somewhat similar to that of the Interstate Highway System in the United States: Arterial routes are designated by two-digit route numbers, with north–south routes having odd numbers, and east–west routes having even numbers. Primary routes (i.e. major thoroughfares) have five and zero as their last digits respectively, while lesser (secondary) routes have various final digits. Branch routes have three-digit route numbers, where the first two digits match the route number of an arterial route. Belt lines have three-digit route numbers where the first digit matches the respective city's postal code. Route numbers in the range 70-99 are not used in South Korea and are reserved for designations in the event of Korean reunification. The Gyeongbu Expressway kept its Route 1 designation, as it is South Korea's first and most important expressway. Spain Spain's national highway system dates back to the era of King Carlos III. The roads built at this time, radiating from Madrid, form the basis for the carreteras nacionales radiales, numbered clockwise from I to VI, which radiate from Madrid to major ports or border crossings. In the 1960s Spain started to construct autopistas (toll highways) and autovías (freeways), and in 2016 had 17,109 km (10,631 mi) of highways, the biggest network in Europe and the fourth in the world, only after the USA, India, and China. Sri Lanka Southern Expressway (E01) is the first expressway in Sri Lanka. It travers from Kottawa (township in Suburban Colombo) to Matara (126 km) and the construction of the section from Kottawa to Pinnaduwa (Galle) was completed as a dual Expressway with 4-lane facility and declared open in November 2011. Galle Port access road has been built to connect Galle city to Pinnaduwa interchange. The design speed of this Expressway is 120 km/h. The operation speed of the Expressway is 100 km/h. The Southern Expressway will be extended up to Hambantota connecting Mattala Rajapaksa International Airport and the Magampura Mahinda Rajapaksa Port. The second expressway to be declared open in Sri Lanka was the Colombo - Katunayake Expressway (E03) that was opened for public from October 2013, which also connects Sri Lanka's premier international airport Bandaranaike International Airport with capital Colombo. Colombo Outer Circular Expressway (E02), which is currently under construction, is designed to link the major expressways connected to Sri Lanka's commercial hub, Colombo, bypassing the traffic within the city limits. Sweden The first freeway in Sweden was built between the cities of Malmö and Lund in the Skåne County in southern Sweden. The Swedish roads are divided in three classes; Motorväg, which is a 4-8 lane motorway with the speed limit of 110–120 km/h. Riksväg, which is a state highway with 2-4 lanes. The Riksväg has a speed limit of 70–100 km/h. 
The third class is the länsväg, a two-lane "county route" with a speed limit of 70-90 km/h. The authority responsible for roads in Sweden is Trafikverket.
Switzerland
The term Autobahn (German) / autoroute (French) / autostrada (Italian) is used for normal highways where a central physical structure separates the two directional carriageways; this is often translated into English as motorway. On express routes where there is no central physical structure separating the carriageways, but crossings are otherwise still motorway-like and traffic lights are not present, the road is instead called an Autostrasse / semi-autoroute / semi-autostrada, usually translated into English as expressway. These often have a lower speed limit than motorways.
Taiwan
The construction of Taiwan's national highways began in 1971, and the design is heavily based on the American Interstate Highway System. The northern section between Keelung City and Zhongli City (now Zhongli District, Taoyuan) was completed in 1974. The construction of the first freeway (No. 1) was completed in 1978; it runs from the northern port city of Keelung to the southern port city of Kaohsiung. There was an 8.6 km branch (No. 1A) connecting Taiwan Taoyuan International Airport. Construction on the other freeways began in the late 1980s. The northern section of the second north-south freeway (No. 3), between Xizhi City and Hsinchu City, was completed in 1997. The No. 1A branch was extended to link to the No. 3 Freeway at Yingge and renamed the No. 2 Freeway. Three other short freeways (No. 4, No. 8, and No. 10) were built to link the two north-south freeways in Taichung County (now part of Taichung City), Tainan County (now part of Tainan City), and Kaohsiung County (now part of Kaohsiung City), respectively. The entire No. 3 Freeway was completed in January 2004. To ease congestion on the No. 1 Freeway in the Taipei metropolitan area, a 20 km elevated bridge was built in 1997 on top of the original freeway between Xizhi City and Wugu, to serve as a bypass for traffic not exiting or entering the freeway within the city limits of Taipei. The construction of a freeway connecting the Taipei metropolitan area and Yilan County began in 1991 and was completed in June 2006; it includes the 12.9 km Hsuehshan Tunnel, the fifth longest road tunnel in the world. An extension from Yilan County to Hualien County is planned, but its construction is being delayed due to environmental concerns.
Thailand
The motorways (Thai: ทางหลวงพิเศษ, RTGS: thang luang phiset) in Thailand are an intercity network of tolled controlled-access highways that currently spans 145 kilometres (90 mi); according to the master plan, it is to be greatly extended, to 4,154.7 kilometres (2,581.6 mi). Thailand's motorway network is considered separate from its expressway network, the system of (usually elevated) expressways within Greater Bangkok. Thailand also has a provincial highway network.
Turkey
Turkey's main highway, the E80 (formerly the E5), runs from Edirne to the capital, Ankara. Turkey's highways now run non-stop between Edirne and Şanlıurfa.
United Kingdom
In the United Kingdom, the terms used for vehicular highways other than motorways include main road, trunk road, 'A' road, 'B' road, 'C' road, and unclassified road; where appropriate, they may additionally be described as dual carriageways.
However, in the law of England and Wales the term public highway includes all public rights of way regardless of the kind or amount of traffic they allow, including streets and public footpaths for pedestrians. The term also includes bridleways, which are for pedestrians, equestrians, and cyclists, as well as byways open to all traffic (for all of those users, plus vehicular traffic). In England and Wales, the public is said to have a "right of way" over a highway. This means that, subject to statutory restrictions, the route (or "way") must be kept clear to allow travel by anyone who wishes to use it. At common law, it is unlawful to obstruct a highway or to interfere with its lawful use, though many statutory provisions provide powers to do so (for instance, to carry out roadworks). Many public highways in the UK have a private owner; that is, someone can prove "title" to them, either by being the registered owner or by having conveyances showing exactly how the land has been bought and sold over a long period of time. Such ownership in no way affects the public highway rights, since the relevant "highway authority" (usually a local authority or the Highways Agency in England and Wales, or Amey Highways in Scotland) is deemed to own the surface of the highway, despite someone else's ownership of the land it passes over or under. Rights of way exist over all highways maintained at the public expense (the majority of roads) and also over some other ways which are not so maintained, on the principle of "once a highway, always a highway". In such cases, landowners must allow public use for "passing and repassing". A right of way may be created by custom (by the way being used for a long period of time) or under the relevant sections of the Highways Act 1980. A right of way may be extinguished or diverted in a number of ways, such as by an Act of Parliament, by a magistrates' stopping-up or diversion order, or by powers given to principal local authorities. For instance, under the Channel Tunnel Rail Link Act 1996, authority was given for the builder of this railway link to stop up certain highways mentioned in Schedule 3 of the Act. The opposite of a highway is a private road or pathway over which no rights of way exist; any use of such private ways is subject to the consent of the owner of the land. Richard Mabey traces the origin of the word "highway" back to the Romans in his 1974 book The Roadside Wildlife Book: "Daniel Defoe, writing in the 1720s, describes the Fosse Way as being raised eight or nine feet in many places. Between AD 40 and 80, the Romans laid something like 6,500 miles of highway."
United States
In the United States, "highway" is a general term for denoting a public way, including the entire area within the right-of-way, and covers many forms:
a high-speed, limited-access road such as an expressway, two-lane expressway, freeway, or large toll highway;
an important road that connects cities and large towns;
any road or street, or a travel way of any kind, including pedestrian ways, trails, and navigable waterways, to which the public has a perpetual right of use.
Note that the phrase "right-of-way" is used differently in the United States than it is in the United Kingdom and certain other places. In the U.S., a highway or road "right-of-way" means the land on which the pavement rests, plus the shoulders beside the pavement, plus any median strip, plus any other adjacent piece of land that is designated for the purposes of the highway or road.
In other words, the "right-of-way" is the strip of land for the highway or road, and a sign that says, "No Parking on Right-of-Way" means that drivers may not park on the pavement or on the land adjacent to it. Many paved highways for vehicles are part of the official National Highway System of the U.S. Paved highways in the United States Numbered Highway System (for example, U.S. Highway 53) can vary from two lanes wide (one lane each direction), shoulderless, roads with no access control, to multi-lane high-speed controlled-access highway, such as the Interstate Highways. These roads are usually distinguished by being important, but not always the primary, routes that connect populated areas. (Sometimes, the primary route is a state highway.) Since their inception many decades ago, the construction of U.S. Highways, and their major improvements, have been paid for 50% with federal funds, especially from motor fuel taxes, and 50% with state funds from whatever tax resources that the state has. Thus, the system of U.S. Highways has always been an equal partnership between the federal government and the state governments. This was a plan that changed dramatically with the advent of the Interstate Highway System beginning in the 1950s, but do not forget that the system of U.S. Highways continued to be upgraded under the 50%-50% funding. Highways continue to be widened, old bridges continue to be replaced with newer and better ones, and so forth. The term Highways in the U.S. even includes major paved roads that serve purposes similar to those of the U.S. Highways or Interstate Highways, but which are completely designed, paid for, and maintained by state or local governments. An example of this is M-6 (Michigan highway), which is an urban bypass of Grand Rapids, Michigan, that is a multi-lane, controlled-access highway entirely designed and paid for by Michigan. Much of the traffic uses it to bypass downtown Grand Rapids to make connections between Interstate 96, Interstate 196 and U.S. Highway 131. When the Act of Congress that authorized the Interstate Highway System was passed and then signed by President Eisenhower, it was already clear that the Interstate Highways would be far more expensive, mile-for-mile, than the U.S. Highways had been. Because of their great cost, Congress decided to set the standard for federal funding for the Interstate System at 90%, leaving 10% for the states to pay for. Another monetary difference came from the fact that the Interstate Highways were to be designed to be high-speed and safe expressways. This meant that they needed to have much wider open strips of land along their sides, because this created safety zones on each side of the highways so that vehicles that were in accidents or simply lost control would have somewhere to go, to slow down gradually, and not crash into trees, boulders, light poles, buildings, parked vehicles, fire hydrants, and other kinds of obstacles. Roadway interchanges for Interstate Highways were also to be very large (and over the decades, they became a lot larger than anyone had anticipated in the 1950s). With so much land being taken away for the highways, the only way to justify it and to make it politically palatable was for the Federal and State governments to outright purchase all of the land. There could be no question of just having an easement for the highway and its right-of-way. 
All of the land within the right-of-way would be permanently owned by the governments, unless and until they decided to remove the highway and sell the land. In some places, highway is a synonym for road or street, and in some cases the word is simply used loosely, treating street, road, and highway as interchangeable. On the other hand, the California Vehicle Code § 360 states: "'Highway' is a way or place of whatever nature, publicly maintained and open to the use of the public for purposes of vehicular travel. Highway includes street." The California Supreme Court has held that "the definition of 'highway' in the Vehicle Code is used for special purposes of that act," and that the canals of the town of Venice, California, are "highways" also entitled to be maintained with state highway funds. The federal and state governments are trying to improve their National Highway System components by repaving highways, widening highways, replacing bridges, and reconstructing some interchanges. Many cloverleaf interchanges are being converted to parclo interchanges, and busy diamond interchanges are being converted to single-point urban interchanges (SPUIs) or to parclos to reduce interchange congestion. Arguably, the most famous United States highway is U.S. Route 66, immortalized in the song "(Get Your Kicks On) Route 66" and by the TV series Route 66. Other famous highways in songs include [U.S.] Highway 61 (Bob Dylan, 1965), the Carefree Highway in Arizona (Gordon Lightfoot, 1974), Colorado Boulevard in Pasadena, California (Jan & Dean, also the Beach Boys, 1964), the "Ventura Highway" in Southern California (America, 1972), and the Blues Highway in Mississippi (Fred McDowell, 1959).
Yemen
Yemen has one of the oldest highway routes in the region; the first was a two-lane highway between Aden and Hadhramaut. Currently, Yemen has 71,300 kilometers of roads, of which only 6,200 kilometers are paved.
Zimbabwe
Zimbabwe has one of the better road networks in Africa, though it had been poorly maintained until recently; toll gates have been introduced and most of the major roads are being dualized.
References
Highways
428522
https://en.wikipedia.org/wiki/Scorewriter
Scorewriter
A scorewriter, or music notation program, is software for creating, editing and printing sheet music. A scorewriter is to music notation what a word processor is to text: it typically provides flexible editing and automatic layout, and produces high-quality printed results. Most scorewriters, especially those from the 2000s, can record notes played on a MIDI keyboard (or other MIDI instruments), and play music back via MIDI or virtual instruments. Playback is especially useful for novice composers and music students, and when musicians are not available or affordable. Several free programs are widely used, such as MuseScore. The three main professional-level programs are Finale, Sibelius and Dorico.
Comparison with multitrack sequencer software
Multitrack sequencer software and scorewriters typically employ different methods for notation input and display. Scorewriters are based on traditional music notation, using staff lines and round note heads, which originates from European classical music. They use symbols representing durations of sound and silence, dynamics, articulations and tempo; some also allow users to import or create their own symbols. Multitrack sequencer software typically uses a multitrack recorder metaphor as the main interface, with multiple tracks and track segments. Individual tracks can be edited using graphic notation in the form of a "piano roll" for the control of MIDI-based hardware and software instruments. A third approach has also emerged that combines the first two input methods in a digital audio workstation, allowing users to score parts using traditional notation, to use the graphic notation of the piano roll, and to record acoustic or electronic instruments in real time alongside the existing scores. With all three methods, the computer keyboard, mouse, and a MIDI musical keyboard can be used to enter music that can then be edited with traditional or piano-roll-based notation.
History
The rapid growth of desktop computers in the 1980s saw the creation of dozens of early scorewriters (see list of scorewriters). They were a boon to young composers, music educators and composition students, providing a much less expensive way to create scores and parts for orchestral music and other works. However, they were hard to use, and while their scores were readable, they did not look like professionally engraved scores or parts. An exception was the SCORE notation software. Developed in the late 1980s, it was used mostly by commercial publishers, as its price put it out of the reach of most non-professional composers and copyists. During the 1990s, many of these early programs fell into disuse as newer programs surpassed them in ease of use and output quality. Finale and Sibelius were released, with high-quality output and a wide range of sophisticated features that made them suitable for almost all kinds of music applications. By 2000, the market was dominated by Finale (particularly in the US) and Sibelius (which had dominated the UK since 1993, and expanded worldwide after its Windows release in 1998). Inexpensive programs such as capella gained a significant share of the market in some countries. Sibelius and Finale still dominated the market as of 2012. In 2006, Sibelius was purchased by Avid. In a 2012 restructuring, Sibelius's London office was closed and the development team dismissed. In February 2013, Steinberg announced it had hired the former Sibelius team to create a new scorewriter, Dorico, which was released in October 2016.
The trio of Finale, Sibelius and Dorico are today's leading professional-level programs.
Functionality
All scorewriters allow the user to input, edit and print music notation, to varying degrees of sophistication. They range from programs that can write a simple song, piano piece or guitar tab, to those that can handle the complexities of orchestral music, specialist notations (from early music to the avant-garde), and high-quality music engraving. Music can usually be input using the mouse, computer keyboard, or a MIDI keyboard. A few allow input by scanning scores using musical OCR, by playing or singing into a microphone, or by using a touch screen. Most scorewriters also allow users to play the music back, using MIDI or virtual instruments such as VST instruments. The screen can show at one time both the score and, by changing the colour of keys on a virtual piano keyboard, the notes being played. Although sequencers can also write some musical notation, they are primarily for recording and playing music; scorewriters can typically write more complex and sophisticated notation than sequencers can. Some scorewriters allow users to customize and fine-tune the printed output to a considerable degree, as is required by publishers to produce high-quality music engraving and to suit their individual house styles. A few scorewriters allow users to publish scores on the Internet, where they can be (for example) played back, transposed, and printed out, perhaps for a fee. Most scorewriters provide other musical functions such as transposing, producing separate instrumental parts from a full score, or applying musical transformations such as retrograde. Some can automatically create instrumental exercises and student worksheets. Some support plug-ins, often developed by users or other companies. Other features may include version control, change tracking, graphics import and export, Post-It-like sticky notes, etc.
File formats
Almost all scorewriters use their own file formats for saving files. Hence, in order to move notation between different scorewriters (or to and from other kinds of music software such as sequencers), most scorewriters can also import or export one or more standard interchange file formats, such as:
Standard MIDI File, supported by almost all scorewriters. However, as this format was designed for playback (e.g. by sequencers) rather than notation, it only produces approximate results, and much notational information is lost in the process. If the score is to be presented aurally, a WAV file (rather than MIDI) may be made from the score to give a more natural and accurate rendition of the written score.
MusicXML, which has in recent years (as of 2012) become the standard interchange format for accurate notation.
NIFF, a now-obsolete file format that was supported by a few scorewriters.
The Comparison of scorewriters article details which scorewriters can import and export PDF, text (ASCII), picture (PNG, SVG, EMF) and sound (Vorbis OGG) file formats. There are also human-readable text-based formats such as ABC notation, LilyPond, ASCII tab and NoteWorthy Composer text files. These are easily rendered as speech by screen-reading software. The Score extension to MediaWiki can render, and generate an audio preview of, the first two formats.
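As a concrete illustration of such interchange, the conversion from notation to playback can be scripted with an open-source toolkit; the sketch below uses the Python library music21 as one possible choice, and the file names are hypothetical. Converting a MusicXML score to a Standard MIDI File preserves pitches, durations and tempo, while, as noted above, purely notational detail does not survive the trip.

    # A minimal sketch using the open-source music21 toolkit.
    # "score.musicxml" and "score.mid" are hypothetical file names.
    from music21 import converter

    score = converter.parse("score.musicxml")  # read the notation interchange format
    score.write("midi", fp="score.mid")        # write a playback-oriented copy;
                                               # layout and engraving detail is lost

The reverse direction (importing a MIDI file into notation) is what produces the "approximate results" described above, since the importer must guess at enharmonic spelling, voicing and rhythmic notation.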
See also
Comparison of scorewriters
International Music Score Library Project (IMSLP)
Player piano
Scorereader
List of music software
References
External links
Musical notation codes – information on most known musical notation file formats
Comparison of 200 Music Fonts from Standard Notation Software
List of typeset music formats, International Music Score Library Project
Music software
5842363
https://en.wikipedia.org/wiki/Nimrod%20%28computer%29
Nimrod (computer)
The Nimrod, built in the United Kingdom by Ferranti for the 1951 Festival of Britain, was an early computer custom-built to play Nim, inspired by the earlier Nimatron. The twelve-by-nine-by-five-foot (3.7-by-2.7-by-1.5-meter) computer, designed by John Makepeace Bennett and built by engineer Raymond Stuart-Williams, allowed exhibition attendees to play a game of Nim against an artificial intelligence. The player pressed buttons on a raised panel corresponding with lights on the machine to select their moves, and the Nimrod moved afterward, with its calculations represented by more lights. The speed of the Nimrod's calculations could be reduced to allow the presenter to demonstrate exactly what the computer was doing, with more lights showing the state of the calculations. The Nimrod was intended to demonstrate Ferranti's computer design and programming skills rather than to entertain, though Festival attendees were more interested in playing the game than in the logic behind it. After its initial exhibition in May, the Nimrod was shown for three weeks in October 1951 at the Berlin Industrial Show before being dismantled. The game of Nim running on the Nimrod is a candidate for one of the first video games, as it was one of the first computer games to have any sort of visual display of the game. It appeared only four years after the 1947 invention of the cathode-ray tube amusement device, the earliest known interactive electronic game to use an electronic display, and one year after Bertie the Brain, a computer similar to the Nimrod which played tic-tac-toe at the 1950 Canadian National Exhibition. The Nimrod's use of light bulbs rather than a screen with real-time (much less moving) visual graphics, however, means it does not meet some definitions of a video game.
Development
In the summer of 1951, the United Kingdom held the Festival of Britain, a national exhibition held throughout the UK to promote the British contribution to science, technology, industrial design, architecture, and the arts, and to commemorate the centenary of the 1851 Great Exhibition. British engineering firm and nascent computer developer Ferranti promised to develop an exhibit for the Festival. In late 1950, John Makepeace Bennett, an Australian employee of the firm and recent Ph.D. graduate from the University of Cambridge, proposed that the company create a computer that could play the game of Nim. In Nim, players take turns removing at least one object from a set of objects, with the goal of being the player who removes the last object; gameplay options can be modeled mathematically. Bennett's suggestion was supposedly inspired by an earlier Nim-playing machine, "Nimatron", which had been displayed in 1940 at the New York World's Fair. The Nimatron machine had been designed by Edward Condon and constructed by Westinghouse Electric from electromechanical relays, and had weighed over a ton. Although Bennett's suggestion was a game, his goal was to show off the computer's ability to do mathematical calculations, as Nim is based on mathematical principles, and thus to showcase Ferranti's computer design and programming skills rather than to entertain. Ferranti began work on building the computer on 1 December 1950, with engineer Raymond Stuart-Williams adapting Bennett's design into a working machine. Development was completed by 12 April 1951, resulting in a device twelve feet wide, nine feet deep, and five feet tall.
The majority of the volume was taken up by the vacuum tubes and the light bulbs that displayed the state of the game, with the actual computer taking up no more than two percent of the total volume of the machine. The Nimrod took the form of a large box with panels of lights, with a raised stand in front of it bearing buttons that corresponded with the lights, which in turn represented the objects the player could remove. The player would sit at the stand and press the buttons to make their moves, while one panel of lights showed the state of the game and another showed the computer's calculations during its move. The computer could be set to make its calculations at various speeds, slowing down so that the demonstrator could describe exactly what it was doing in real time. A visual guide attached to the Nimrod explained what the computer was doing during its turn, as well as showing possible game states and how they would be represented by the lights. Signs stating which player's turn it was, and whether one or the other had won, would light up as appropriate during gameplay.
Presentation
On 5 May 1951, the Nimrod computer was presented at the Festival as the Nimrod Digital Computer, advertised as "faster than thought" and an "electronic brain". It exclusively played the game of Nim; moves were made by players seated at the raised stand, with the demonstrator sitting on the other side, between the stand and the computer. Nimrod could play either the traditional or "reverse" form of the game. A short guidebook, sold to visitors for one shilling and sixpence, explained how computers worked and how the Nimrod worked, and advertised Ferranti's other developments. It explained that the use of a game to demonstrate the power of the machine did not mean that the machine was meant for entertainment, and compared the mathematical underpinnings of Nim with modeling the economics of countries. Players of the Nimrod during the Festival included computer science pioneer Alan Turing. Although it was intended as a technology demonstration, most of the onlookers at the Festival of Britain were more interested in playing the game than in the programming and engineering logic behind it. Bennett claimed that "most of the public were quite happy to gawk at the flashing lights and be impressed." BBC Radio journalist Paul Jennings claimed that all of the festival attendees "came to a standstill" upon reaching the "frightful" "tremendous gray refrigerator". After the Festival, the Nimrod was showcased for three weeks in October at the Berlin Industrial Show, where it also drew crowds, including the West German economics minister Ludwig Erhard. It was then briefly shown in Toronto; afterwards, as it had served its purpose, the Nimrod was dismantled. As the Nimrod was not intended as an entertainment product, it was not followed up by any future games, and Ferranti continued its work on designing general-purpose computers. Nim was used as a demonstration program for several computers over the next few years, including the Norwegian NUSE (1954), the Swedish SMIL (1956), the Australian SILLIAC (1956), the Polish Odra 1003 (Marienbad, 1962), the Dutch Nimbi (1963), and the French Antinéa (1963).
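Nim's popularity as a demonstration program stems from how little computation perfect play requires. As Charles Bouton showed in 1901, a position in normal-play Nim is lost for the player to move exactly when the "nim-sum" (the bitwise exclusive-or of the heap sizes) is zero, and a winning move is any move that restores a zero nim-sum. The following Python sketch illustrates only the mathematics; the Nimrod itself implemented this logic directly in vacuum-tube hardware, not software.

    from functools import reduce
    from operator import xor

    def winning_move(heaps):
        # Normal-play Nim: whoever takes the last object wins.
        # Return (heap_index, new_size) for a move that leaves a
        # nim-sum of zero, or None if no such move exists (the
        # position is lost against perfect play).
        nim_sum = reduce(xor, heaps, 0)
        if nim_sum == 0:
            return None
        for i, heap in enumerate(heaps):
            target = heap ^ nim_sum   # the size this heap must shrink to
            if target < heap:
                return i, target

    print(winning_move([3, 4, 5]))    # (0, 1): take two objects from the 3-heap

In the "reverse" (misère) form that the Nimrod could also play, the same rule applies until a move would leave only single-object heaps, at which point the correct play is instead to leave an odd number of them.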
The Nimrod was created only four years after the 1947 invention of the cathode-ray tube amusement device, the earliest known interactive electronic game, and one year after a similar purpose-built game-playing machine, Bertie the Brain, the first computer-based game to feature a visual display of any sort. The Nimrod is considered under some definitions one of the first video games, possibly the second. While definitions vary, the prior cathode-ray tube amusement device was a purely analog electrical game, whereas the Nimrod and Bertie, though they did not feature an electronic screen, both ran a game on a computer. The software-based tic-tac-toe game OXO and a draughts program by Christopher Strachey, programmed a year later in 1952, were the first computer games to display visuals on an electronic screen rather than through light bulbs.
References
External links
The Nimrod Computer
1951 video games
Early British computers
Nimrod
Festival of Britain
History of computing in the United Kingdom
Video games developed in the United Kingdom
60112682
https://en.wikipedia.org/wiki/2019%20Troy%20Trojans%20football%20team
2019 Troy Trojans football team
The 2019 Troy Trojans football team represented Troy University in the 2019 NCAA Division I FBS football season. The Trojans played their home games at Veterans Memorial Stadium in Troy, Alabama, and competed in the East Division of the Sun Belt Conference. They were led by first-year head coach Chip Lindsey.
Previous season
The Trojans finished the 2018 season 10–3, 7–1 in Sun Belt play, tying for the East Division championship with Appalachian State. Due to their head-to-head loss to Appalachian State, the Trojans did not represent the East Division in the Sun Belt Championship Game. They received an invitation to the Dollar General Bowl, where they defeated Buffalo. Head coach Neal Brown left at the conclusion of the season to become the head coach at West Virginia. On January 10, 2019, the school hired Kansas offensive coordinator Chip Lindsey as head coach.
Preseason
Sun Belt coaches poll
Preseason All-Sun Belt Teams
Schedule
Game summaries
Campbell
Southern Miss
at Akron
Arkansas State
at Missouri
South Alabama
at Georgia State
at Coastal Carolina
Georgia Southern
at Texas State
at Louisiana
Appalachian State
References
Troy
Troy Trojans football seasons
Troy Trojans football
5626
https://en.wikipedia.org/wiki/Cognitive%20science
Cognitive science
Cognitive science is the interdisciplinary, scientific study of the mind and its processes, with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision-making to logic and planning, and from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures." The goal of cognitive science is to understand the principles of intelligence, with the hope that this will lead to a better comprehension of the mind and of learning, and to the development of intelligent devices. The cognitive sciences began as an intellectual movement in the 1950s, often referred to as the cognitive revolution.
History
The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima), and includes writers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke. However, although these early writers contributed greatly to the philosophical discovery of mind, and this would ultimately lead to the development of psychology, they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.
The modern culture of cognitive science can be traced back to the early cyberneticists of the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks. Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind and as a tool for investigation. The first instance of cognitive science experiments being done at an academic institution took place at the MIT Sloan School of Management, where J.C.R. Licklider, working within the psychology department, conducted experiments using computer memory as a model for human cognition.
In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations.
Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order.
The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego.
In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI".
Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 1980s and 1990s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionist and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower-level brain functions, neither is biologically realistic, and therefore both suffer from a lack of neuroscientific plausibility. Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific/domain-general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input.
Principles
Levels of analysis
A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation.
A person could be presented with a phone number and be asked to recall it after some delay; then the accuracy of the response could be measured. Another approach to measuring cognitive ability would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real time were available, and it were known when each neuron fired, it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience, argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience". On the classic cognitivist view, this can be provided by a functional-level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior.
Marr gave a famous description of three levels of analysis:
The computational theory, specifying the goals of the computation;
Representation and algorithms, giving a representation of the inputs and outputs and the algorithms which transform one into the other; and
The hardware implementation, or how algorithm and representation may be physically realized.
Interdisciplinary nature
Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology and biology. Cognitive scientists work collectively in the hope of understanding the mind and its interactions with the surrounding world, much as other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human cognition. Similarly to the field of psychology, there is some doubt whether there is a unified cognitive science, which has led some researchers to prefer "cognitive sciences" in the plural.
Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind: the view that mental states and processes should be explained by their function, that is, by what they do. According to the multiple realizability account of functionalism, even non-human systems such as robots and computers can be ascribed cognition.
Cognitive science: the term
The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth-conditional semantics. The earliest entries for the word "cognitive" in the OED take it to mean roughly "pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge.
Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.
Scope
Cognitive science is a large field, and covers a wide array of topics on cognition. However, cognitive science has not always been equally concerned with every topic that might bear on the nature and operation of minds. Classical cognitivists largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness, animal cognition, and comparative and evolutionary psychologies. However, with the decline of behaviorism, internal states such as affects and emotions, as well as awareness and covert attention, became approachable again. For example, situated and embodied cognition theories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the hallmark of psychological theory, but the modeling or recording of mental states was.
Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list. See List of cognitive science topics for a list of various aspects of the field.
Artificial intelligence
Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured. There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic artificial intelligence. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain.
Attention
Attention is the selection of important information. The human mind is bombarded with millions of stimuli and must have a way of deciding which of these stimuli to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are bombarded with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.
Bodily processes related to cognition
Embodied cognition approaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes, to posture, motor control, proprioception, and kinaesthesis, to autonomic processes that involve heartbeat and respiration, to the role of the enteric gut microbiome. It also includes accounts of how the body engages with or is coupled to social and physical environments.
4E (embodied, embedded, extended and enactive) cognition includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 4E theories range from those closer to classic cognitivism (so-called "weak" embodied cognition) to stronger extended and enactive versions that are sometimes referred to as radical embodied cognitive science.
Knowledge and processing of language
The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in theoretical linguistics is discovering what properties language must have in the abstract in order to be learnable in this way. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language?, and (3) How are humans able to understand novel sentences?
The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction.
The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.
Learning and development
Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place.
A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature versus nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment.
Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience.
Memory
Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and a short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes).
Memory is also often grouped into declarative and procedural forms. Declarative memory, grouped into subsets of semantic and episodic forms of memory, refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory.
Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and on the interrelationship between cognition and memory. One example of this could be: what mental processes does a person go through to retrieve a long-lost memory? Or: what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill-in-the-blank")?
Perception and action
Perception is the ability to take in information via the senses and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects?, (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is looking at how people process optical illusions. The Necker cube is an example of a bistable percept; that is, the cube can be interpreted as being oriented in two different directions. The study of haptic (tactile), olfactory, and gustatory stimuli also falls into the domain of perception.
Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.
Consciousness
Consciousness is the awareness of external objects and of experiences within oneself. It gives the mind the ability to experience or feel a sense of self.
Research methods
Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory.
Behavioral experiments
In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology, including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant).
Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing.
Psychophysical responses. Psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include:
sameness judgments for colors, tones, textures, etc.
threshold differences for colors, tones, textures, etc.
Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention. Thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed.
Brain imaging
Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience.
Single-photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution.
Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution.
Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution.
Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflected by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflect light in different amounts, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution, but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains.
Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution, since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields.
Computational modeling
Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon. Approaches to cognitive modeling can be categorized as: (1) symbolic, modeling abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, modeling the neural and associative properties of the human brain; and (3) hybrid approaches that work across the symbolic–subsymbolic border.
Symbolic modeling evolved from the computer science paradigms of knowledge-based systems, as well as from a philosophical perspective (e.g. "Good Old-Fashioned Artificial Intelligence" (GOFAI)). Such models were developed by the first cognitive researchers and later used in information engineering for expert systems. Since the early 1990s this approach has been generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, developed as the SOAR environment. Recently, especially in the context of cognitive decision-making, symbolic cognitive modeling has been extended to the socio-cognitive approach, including social and organizational cognition, interrelated with a sub-symbolic non-conscious layer.
Subsymbolic modeling includes connectionist/neural network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and that its problem-solving capacity derives from the connections between them. Neural nets are textbook implementations of this approach. Some critics of this approach feel that while these models approach biological reality as a representation of how the system works, they lack explanatory power because, even in systems endowed with simple connection rules, the emerging high complexity makes them less interpretable at the connection level than they apparently are at the macroscopic level.
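As a minimal illustration of the connectionist idea, consider a single perceptron unit trained on the logical AND function (an illustrative sketch only, not any particular published model): the learned "knowledge" ends up distributed across connection weights rather than stated as an explicit symbolic rule.

```python
import numpy as np

# A toy connectionist unit: a single perceptron that learns logical AND.
# Knowledge is stored in the connection weights, not in explicit rules.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND truth table

w = np.zeros(2)   # connection weights
b = 0.0           # bias (threshold)
lr = 0.1          # learning rate

for epoch in range(25):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = target - pred
        w = w + lr * err * xi   # strengthen or weaken connections on error
        b = b + lr * err

for xi in X:
    print(xi, "->", 1 if xi @ w + b > 0 else 0)
```

Even in this toy case, the trained weights are harder to read off as a rule than the equivalent symbolic statement "output 1 only if both inputs are 1", which is the interpretability trade-off the critics above point to.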
Other approaches gaining in popularity include (1) dynamical systems theory, (2) mapping symbolic models onto connectionist models (neural-symbolic integration or hybrid intelligent systems), and (3) Bayesian models, which are often drawn from machine learning.
All the above approaches tend either to be generalized into integrated computational models of a synthetic/abstract intelligence (i.e. a cognitive architecture), in order to be applied to the explanation and improvement of individual and social/organizational decision-making and reasoning, or to focus on single simulative programs (or microtheories/"middle-range" theories) modelling specific cognitive faculties (e.g. vision, language, categorization, etc.).
Neurobiological methods
Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system.
Single-unit recording
Direct brain stimulation
Animal models
Postmortem studies
Key findings
Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics (related to denotational mathematics), and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits), ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affects cognition, and it has helped to uncover the root causes and results of specific dysfunctions, such as dyslexia, anopia, and hemispatial neglect.
Criticism
See Criticism of cognitive psychology.
Notable researchers
Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names include Daniel Dennett, who writes from a computational systems perspective, John Searle, known for his controversial Chinese room argument, and Jerry Fodor, who advocates functionalism. Others include David Chalmers, who advocates dualism and is also known for articulating the hard problem of consciousness, and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought. In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, and Allen Newell are prominent. Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, Lawrence Barsalou, Vittorio Guidano, Howard Gardner and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Bradd Shore, James Wertsch and Scott Atran have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association. Computational theories (with models and simulations) have also been developed by David Rumelhart, James McClelland and Philip Johnson-Laird.
Epistemics
Epistemics is a term coined in 1969 by the University of Edinburgh with the foundation of its School of Epistemics. Epistemics is to be distinguished from epistemology in that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge. Christopher Longuet-Higgins has defined it as "the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated." In his 1978 essay "Epistemics: The Regulative Theory of Cognition", Alvin I. Goldman claims to have coined the term "epistemics" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs.
In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh's School of Informatics.
See also
Affective science
Cognitive anthropology
Cognitive biology
Cognitive computing
Cognitive ethology
Cognitive linguistics
Cognitive neuropsychology
Cognitive neuroscience
Cognitive psychology
Cognitive science of religion
Computational neuroscience
Computational-representational understanding of mind
Concept mining
Decision field theory
Decision theory
Dynamicism
Educational neuroscience
Educational psychology
Embodied cognition
Embodied cognitive science
Enactivism
Epistemology
Folk psychology
Heterophenomenology
Human Cognome Project
Human–computer interaction
Indiana Archives of Cognitive Science
Informatics (academic field)
List of cognitive scientists
List of psychology awards
Malleable intelligence
Neural Darwinism
Personal information management (PIM)
Qualia
Quantum cognition
Simulated consciousness
Situated cognition
Society of Mind theory
Spatial cognition
Speech–language pathology
Outlines
Outline of human intelligence – topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more.
Outline of thought – topic tree that identifies many types of thoughts, types of thinking, aspects of thought, related fields, and more.
References
External links
"Cognitive Science" on the Stanford Encyclopedia of Philosophy
Cognitive Science Society
Cognitive Science Movie Index: A broad list of movies showcasing themes in the Cognitive Sciences
List of leading thinkers in cognitive science
Interdisciplinary subfields
Interdisciplinary branches of psychology
41393863
https://en.wikipedia.org/wiki/Darling%20%28software%29
Darling (software)
Darling is a free and open-source macOS compatibility layer for Linux. It duplicates functions of macOS by providing alternative implementations of the libraries and frameworks that macOS programs call. This method of duplication differs from other methods that might also be considered emulation, where macOS programs run in a virtual machine. Darling has been called the counterpart to WINE for running OS X apps.
The project started in the summer of 2012 and builds on a previous project, named maloader, which was discontinued due to a lack of time. The developer tests applications such as Midnight Commander and The Unarchiver on the layer. So far, the layer has been shown to work with many console apps, but it does not currently support graphical applications. Darling does have the ability to extract an Apple Disk Image. The project may also support iOS applications in the future.
Architecture
At the entry of the Darling system is a loader for Mach-O binaries, the executable format for Apple's operating systems. Darling's predecessor, maloader, took a maximalist approach to the problem by trying to replicate everything that Apple's dynamic library loader dyld does. This proved to be hard, and since a 2017 "Mach-O transition" Darling has been using a lightweight loader that does just enough to launch the open-source Apple dyld instead.
To provide the macOS binaries with a kernel, Darling uses a modified XNU kernel wrapped into a Linux kernel module. This module handles the typical job of a Mach kernel, mainly Mach ports IPC. Some licensing issues exist in the darling-mach module, as the team is adding GNU GPL modifications to the APSL-licensed kernel.
Above the kernel is the root environment. Darling, like WINE, supports chroot prefixes, implemented using the Linux overlayfs (as opposed to path translation in WINE). PID, IPC, and UTS namespaces are used to create a container for the Darwin system inside.
The frameworks and system libraries in Darling are, to the best possible extent, based on source code released by Apple. The Mach-O transition allows these frameworks to be built more easily, because they are now built as the Mach-O format they were intended for. To fill in the gaps for many higher-level frameworks like Cocoa, Darling uses code from Cocotron, ApportableFoundation, and GNUstep.
References
Compatibility layers
Free system software
Linux emulation software
Free software programmed in C
2013 software
3900832
https://en.wikipedia.org/wiki/Digital%20humanities
Digital humanities
Digital humanities (DH) is an area of scholarly activity at the intersection of computing or digital technologies and the disciplines of the humanities. It includes the systematic use of digital resources in the humanities, as well as the analysis of their application. DH can be defined as new ways of doing scholarship that involve collaborative, transdisciplinary, and computationally engaged research, teaching, and publishing. It brings digital tools and methods to the study of the humanities with the recognition that the printed word is no longer the main medium for knowledge production and distribution.
By producing and using new applications and techniques, DH makes new kinds of teaching and research possible, while at the same time studying and critiquing how these impact cultural heritage and digital culture. Thus, a distinctive feature of DH is its cultivation of a two-way relationship between the humanities and the digital: the field both employs technology in the pursuit of humanities research and subjects technology to humanistic questioning and interrogation, often simultaneously.
Definition
The definition of the digital humanities is continually being formulated by scholars and practitioners. Since the field is constantly growing and changing, specific definitions can quickly become outdated or unnecessarily limit future potential. The second volume of Debates in the Digital Humanities (2016) acknowledges the difficulty in defining the field: "Along with the digital archives, quantitative analyses, and tool-building projects that once characterized the field, DH now encompasses a wide range of methods and practices: visualizations of large image sets, 3D modeling of historical artifacts, 'born digital' dissertations, hashtag activism and the analysis thereof, alternate reality games, mobile makerspaces, and more. In what has been called 'big tent' DH, it can at times be difficult to determine with any specificity what, precisely, digital humanities work entails."
Historically, the digital humanities developed out of humanities computing and has become associated with other fields, such as humanistic computing, social computing, and media studies. In concrete terms, the digital humanities embraces a variety of topics, from curating online collections of primary sources (primarily textual) to the data mining of large cultural data sets to topic modeling. Digital humanities incorporates both digitized (remediated) and born-digital materials and combines the methodologies of traditional humanities disciplines (such as rhetoric, history, philosophy, linguistics, literature, art, archaeology, music, and cultural studies) and the social sciences with tools provided by computing (such as hypertext, hypermedia, data visualisation, information retrieval, data mining, statistics, text mining, digital mapping) and digital publishing. Related subfields of digital humanities have emerged, such as software studies, platform studies, and critical code studies. Fields that parallel the digital humanities include new media studies and information science, as well as media theory of composition, game studies (particularly in areas related to digital humanities project design and production), and cultural analytics. Each disciplinary field and each country has its own unique history of digital humanities.
Berry and Fagerjord have suggested that a way to reconceptualise digital humanities could be through a "digital humanities stack".
They argue that "this type of diagram is common in computation and computer science to show how technologies are 'stacked' on top of each other in increasing levels of abstraction. Here, [they] use the method in a more illustrative and creative sense of showing the range of activities, practices, skills, technologies and structures that could be said to make up the digital humanities, with the aim of providing a high-level map." Indeed, the "diagram can be read as the bottom levels indicating some of the fundamental elements of the digital humanities stack, such as computational thinking and knowledge representation, and then other elements that later build on these."
In practical terms, a major distinction within digital humanities is the focus on the data being processed. For processing textual data, digital humanities builds on a long and extensive history of digital edition, computational linguistics and natural language processing, and has developed an independent and highly specialized technology stack (largely culminating in the specifications of the Text Encoding Initiative). This part of the field is thus sometimes set apart from digital humanities in general as "digital philology" or "computational philology". For the creation and analysis of digital editions of objects or artifacts, digital philologists have access to digital practices, methods, and technologies such as optical character recognition that are providing opportunities to adapt the field to the digital age.
History
Digital humanities descends from the field of humanities computing, whose origins reach back to the 1940s and 1950s, in the pioneering work of the Jesuit scholar Roberto Busa, which began in 1946, and of the English professor Josephine Miles, beginning in the early 1950s. In collaboration with IBM, Busa and his team created a computer-generated concordance to Thomas Aquinas' writings known as the Index Thomisticus. Other scholars began using mainframe computers to automate tasks like word-searching, sorting, and counting, which was much faster than processing information from texts with handwritten or typed index cards. In the decades which followed, archaeologists, classicists, historians, literary scholars, and a broad array of humanities researchers in other disciplines applied emerging computational methods to transform humanities scholarship.
As Tara McPherson has pointed out, the digital humanities also inherit practices and perspectives developed through many artistic and theoretical engagements with electronic screen culture beginning in the late 1960s and 1970s. These range from research developed by organizations such as SIGGRAPH to creations by artists such as Charles and Ray Eames and the members of E.A.T. (Experiments in Art and Technology). The Eameses and E.A.T. explored nascent computer culture and intermediality in creative works that dovetailed technological innovation with art.
The first specialized journal in the digital humanities was Computers and the Humanities, which debuted in 1966. The Computer Applications and Quantitative Methods in Archaeology (CAA) association was founded in 1973. The Association for Literary and Linguistic Computing (ALLC) and the Association for Computers and the Humanities (ACH) were then founded in 1977 and 1978, respectively. Soon, there was a need for a standardized protocol for tagging digital texts, and the Text Encoding Initiative (TEI) was developed. The TEI project was launched in 1987 and published the first full version of the TEI Guidelines in May 1994.
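To give a flavor of what TEI-style tagging looks like, and how such markup can be read programmatically, the following is a hypothetical minimal fragment with a few lines of Python that parse it (a sketch only; real TEI documents follow the full Guidelines and their schemas, and the sample verse is used purely as filler content):

```python
import xml.etree.ElementTree as ET

# A hypothetical, minimal TEI-style fragment (illustrative only).
doc = """
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>A Sample Sonnet</title></titleStmt>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <lg type="stanza">
        <l n="1">Shall I compare thee to a summer's day?</l>
        <l n="2">Thou art more lovely and more temperate.</l>
      </lg>
    </body>
  </text>
</TEI>
"""

ns = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(doc)
print(root.find(".//tei:title", ns).text)  # the encoded title
for line in root.findall(".//tei:l", ns):  # each tagged verse line
    print(line.get("n"), line.text)
```

The point of such markup is that structure (titles, stanzas, line numbers) becomes machine-readable, so the same encoded text can drive a reading edition, a concordance, or a statistical analysis.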
TEI helped shape the field of electronic textual scholarship and led to Extensible Markup Language (XML), which is a tag scheme for digital editing. Researchers also began experimenting with databases and hypertextual editing, which are structured around links and nodes, as opposed to the standard linear convention of print. In the 1990s, major digital text and image archives emerged at centers of humanities computing in the U.S. (e.g. the Women Writers Project, the Rossetti Archive, and The William Blake Archive), which demonstrated the sophistication and robustness of text-encoding for literature. The advent of personal computing and the World Wide Web meant that Digital Humanities work could become less centered on text and more on design. The multimedia nature of the internet has allowed Digital Humanities work to incorporate audio, video, and other components in addition to text.
The terminological change from "humanities computing" to "digital humanities" has been attributed to John Unsworth, Susan Schreibman, and Ray Siemens, who, as editors of the anthology A Companion to Digital Humanities (2004), tried to prevent the field from being viewed as "mere digitization". Consequently, the hybrid term has created an overlap between fields like rhetoric and composition, which use "the methods of contemporary humanities in studying digital objects", and digital humanities, which uses "digital technology in studying traditional humanities objects". The use of computational systems and the study of computational media within the humanities, arts and social sciences more generally has been termed the "computational turn".
In 2006 the National Endowment for the Humanities (NEH) launched the Digital Humanities Initiative (renamed the Office of Digital Humanities in 2008), which led to widespread adoption of the term "digital humanities" in the United States. Digital humanities emerged from its former niche status and became "big news" at the 2009 MLA convention in Philadelphia, where digital humanists made "some of the liveliest and most visible contributions" and had their field hailed as "the first 'next big thing' in a long time."
Values and methods
Although digital humanities projects and initiatives are diverse, they often reflect common values and methods. These can help in understanding this hard-to-define field.
Values
Critical and theoretical
Iterative and experimental
Collaborative and distributed
Multimodal and performative
Open and accessible
Methods
Enhanced critical curation
Augmented editions and fluid textuality
Scale: the law of large numbers
Distant/close, macro/micro, surface/depth
Cultural analytics, aggregation, and data-mining
Visualization and data design
Locative investigation and thick mapping
The animated archive
Distributed knowledge production and performative access
Humanities gaming
Code, software, and platform studies
Database documentaries
Repurposable content and remix culture
Pervasive infrastructure
Ubiquitous scholarship
In keeping with the value of being open and accessible, many digital humanities projects and journals are open access and/or under Creative Commons licensing, showing the field's "commitment to open standards and open source."
Open access is designed to enable anyone with an internet-enabled device and an internet connection to view a website or read an article without having to pay, as well as to share content with the appropriate permissions.
Digital humanities scholars use computational methods either to answer existing research questions or to challenge existing theoretical paradigms, generating new questions and pioneering new approaches. One goal is to systematically integrate computer technology into the activities of humanities scholars, as is done in the contemporary empirical social sciences. Yet despite the significant trend in digital humanities towards networked and multimodal forms of knowledge, a substantial amount of digital humanities focuses on documents and text in ways that differentiate the field's work from digital research in media studies, information studies, communication studies, and sociology. Another goal of digital humanities is to create scholarship that transcends textual sources. This includes the integration of multimedia, metadata, and dynamic environments (see The Valley of the Shadow project at the University of Virginia, the Vectors Journal of Culture and Technology in a Dynamic Vernacular at the University of Southern California, or the Digital Pioneers projects at Harvard).
A growing number of researchers in digital humanities are using computational methods for the analysis of large cultural data sets such as the Google Books corpus. Examples of such projects were highlighted by the Humanities High Performance Computing competition sponsored by the Office of Digital Humanities in 2008, and also by the Digging Into Data challenge organized in 2009 and 2011 by NEH in collaboration with NSF, and in partnership with JISC in the UK and SSHRC in Canada. In addition to books, historical newspapers can also be analyzed with big data methods. The analysis of vast quantities of historical newspaper content has shown how periodic structures can be automatically discovered, and a similar analysis has been performed on social media. As part of the big data revolution, gender bias, readability, content similarity, reader preferences, and even mood have been analyzed with text mining methods over millions of documents, including historical documents written in literary Chinese.
Digital humanities is also involved in the creation of software, providing "environments and tools for producing, curating, and interacting with knowledge that is 'born digital' and lives in various digital contexts." In this context, the field is sometimes known as computational humanities.
Tools
Digital humanities scholars use a variety of digital tools for their research, which may take place in an environment as small as a mobile device or as large as a virtual reality lab. Environments for "creating, publishing and working with digital scholarship include everything from personal equipment to institutes and software to cyberspace." Some scholars use advanced programming languages and databases, while others use less complex tools, depending on their needs. DiRT (Digital Research Tools Directory) offers a registry of digital research tools for scholars. TAPoR (Text Analysis Portal for Research) is a gateway to text analysis and retrieval tools. An accessible, free example of an online textual analysis program is Voyant Tools, which only requires the user to copy and paste either a body of text or a URL and then click the "reveal" button to run the program.
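At its core, this kind of surface-level text analysis amounts to tokenizing a text, filtering out very common function words, and ranking what remains by frequency. A minimal Python sketch of that pipeline is shown below (illustrative only; Voyant's actual processing is considerably more sophisticated, and the stop-word list here is an invented toy example):

```python
import re
from collections import Counter

# A toy stop-word list; real tools ship much larger, curated lists.
STOP = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}

def top_terms(text: str, k: int = 10):
    """Tokenize, drop stop words, and return the k most frequent terms."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP)
    return counts.most_common(k)

sample = "Call me Ishmael. Some years ago, never mind how long precisely,
having little or no money in my purse, I thought I would sail about."
print(top_terms(sample.replace("\n", " ")))
```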
There is also an online list of online or downloadable Digital Humanities tools that are largely free, aimed at helping students and others who lack access to funding or institutional servers. Free, open-source web publishing platforms like WordPress and Omeka are also popular tools.
Projects
Digital humanities projects are more likely than traditional humanities work to involve a team or a lab, which may be composed of faculty, staff, graduate or undergraduate students, information technology specialists, and partners in galleries, libraries, archives, and museums. Credit and authorship are often given to multiple people to reflect this collaborative nature, which is different from the sole authorship model in the traditional humanities (and more like the natural sciences). There are thousands of digital humanities projects, ranging from small-scale ones with limited or no funding to large-scale ones with multi-year financial support. Some are continually updated while others may not be, due to loss of support or interest, though they may still remain online in either a beta version or a finished form. The following are a few examples of the variety of projects in the field:
Digital archives
The Women Writers Project (begun in 1988) is a long-term research project to make pre-Victorian women writers more accessible through an electronic collection of rare texts.
The Walt Whitman Archive (begun in the 1990s) sought to create a hypertext and scholarly edition of Whitman's works and now includes photographs, sounds, and the only comprehensive current bibliography of Whitman criticism.
The Emily Dickinson Archive (begun in 2013) is a collection of high-resolution images of Dickinson's poetry manuscripts as well as a searchable lexicon of over 9,000 words that appear in the poems.
The Slave Societies Digital Archive (formerly Ecclesiastical and Secular Sources for Slave Societies), directed by Jane Landers and hosted at Vanderbilt University, preserves endangered ecclesiastical and secular documents related to Africans and African-descended peoples in slave societies. This digital archive currently holds 500,000 unique images, dating from the 16th to the 20th centuries, and documents the history of between 6 and 8 million individuals. They are the most extensive serial records for the history of Africans in the Atlantic World and also include valuable information on the indigenous, European, and Asian populations who lived alongside them.
Librarians and archivists play an important part in digital humanities projects because their role has recently expanded to cover digital curation, which is critical to the preservation, promotion, and accessibility of digital collections, as well as to the application of a scholarly orientation to digital humanities projects. A specific example is initiatives where archivists help scholars and academics build their projects through their experience in evaluating, implementing, and customizing metadata schemas for library collections; a sketch of what such schema mapping can look like appears at the end of this section.
The initiatives at the National Autonomous University of Mexico are another example of a digital humanities project. These include the digitization of 17th-century manuscripts, an electronic corpus of Mexican history from the 16th to the 19th century, and the visualization of pre-Hispanic archaeological sites in 3-D.
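As a hypothetical, minimal illustration of the metadata work mentioned above, the sketch below maps a project's local field names onto Dublin Core element names so that records remain interoperable across collections (the local field names and the sample record are invented for illustration; real schemas are richer and project-specific):

```python
# Hypothetical mapping from a project's local fields to Dublin Core
# element names (dc:title, dc:creator, dc:date, and dc:rights are
# genuine Dublin Core elements; the local names are invented).
FIELD_MAP = {
    "document_title": "dc:title",
    "author": "dc:creator",
    "year": "dc:date",
    "usage_rights": "dc:rights",
}

def to_dublin_core(local_record: dict) -> dict:
    """Translate a local record into Dublin Core keyed fields."""
    return {FIELD_MAP[k]: v for k, v in local_record.items() if k in FIELD_MAP}

item = {
    "document_title": "Letter, 1856",
    "author": "Whitman, Walt, 1819-1892",
    "year": "1856",
    "usage_rights": "Public domain",
}
print(to_dublin_core(item))
```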
Cultural analytics
"Cultural analytics" refers to the use of computational methods for the exploration and analysis of large visual collections and also contemporary digital media. The concept was developed in 2005 by Lev Manovich, who then established the Cultural Analytics Lab in 2007 at the Qualcomm Institute at the California Institute for Telecommunications and Information Technology (Calit2). The lab has been using methods from the field of computer science called computer vision to analyze many types of both historical and contemporary visual media: for example, all covers of Time magazine published between 1923 and 2009, 20,000 historical art photographs from the collection of the Museum of Modern Art (MoMA) in New York, one million pages from manga books, and 16 million images shared on Instagram in 17 global cities. Cultural analytics also includes using methods from media design and data visualization to create interactive visual interfaces for the exploration of large visual collections, e.g., Selfiecity and On Broadway.
Cultural analytics research also addresses a number of theoretical questions. How can we "observe" giant cultural universes of both user-generated and professional media content created today, without reducing them to averages, outliers, or pre-existing categories? How can work with large cultural data help us question our stereotypes and assumptions about cultures? What new theoretical cultural concepts and models are required for studying global digital culture with its new mega-scale, speed, and connectivity? The term "cultural analytics" (or "culture analytics") is now used by many other researchers, as exemplified by two academic symposiums, a four-month-long research program at UCLA that brought together 120 leading researchers from university and industry labs, the academic peer-reviewed Journal of Cultural Analytics: CA, established in 2016, and academic job listings.
Textual mining, analysis, and visualization
WordHoard (begun in 2004) is a free application that enables scholarly but non-technical users to read and analyze, in new ways, deeply tagged texts, including the canon of Early Greek epic, Chaucer, Shakespeare, and Spenser. The Republic of Letters (begun in 2008) seeks to visualize the social network of Enlightenment writers through an interactive map and visualization tools. Network analysis and data visualization are also used for reflections on the field itself: researchers may produce network maps of social media interactions or infographics from data on digital humanities scholars and projects. The Document in Context of its Time (DICT) analysis style and its online demo tool let users interactively discover whether the vocabulary used by the author of an input text was frequent at the time of the text's creation, whether the author used anachronisms or neologisms, and enable detecting terms in the text that underwent considerable semantic change.
Analysis of macroscopic trends in cultural change
Culturomics is a form of computational lexicology that studies human behavior and cultural trends through the quantitative analysis of digitized texts. Researchers data mine large digital archives to investigate cultural phenomena reflected in language and word usage. The term is an American neologism first described in a 2010 Science article called Quantitative Analysis of Culture Using Millions of Digitized Books, co-authored by Harvard researchers Jean-Baptiste Michel and Erez Lieberman Aiden.
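The basic culturomics computation is simple to state: for each year of a dated corpus, count how often a term occurs relative to all tokens published that year, and track that relative frequency over time. A minimal Python sketch follows (the toy corpus is invented for illustration; the Google Books pipeline works at vastly larger scale, with far more careful normalization and smoothing):

```python
from collections import Counter

def term_trajectory(docs_by_year: dict, term: str) -> dict:
    """Relative frequency of `term` per year.

    docs_by_year maps a year to a list of tokenized documents
    (each document is a list of lowercase tokens).
    """
    trend = {}
    for year, docs in sorted(docs_by_year.items()):
        counts = Counter(tok for doc in docs for tok in doc)
        total = sum(counts.values())
        trend[year] = counts[term] / total if total else 0.0
    return trend

# Invented toy corpus, purely to show the shape of the computation.
corpus = {
    1900: [["the", "telegraph", "arrived"], ["the", "horse", "cart"]],
    1950: [["the", "television", "arrived"], ["the", "telegraph", "faded"]],
}
print(term_trajectory(corpus, "telegraph"))
```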
A 2017 study published in the Proceedings of the National Academy of Sciences of the United States of America compared the trajectories of n-grams over time in the digitised books of the 2010 Science article with those found in a large corpus of regional newspapers from the United Kingdom spanning 150 years. The study went on to use more advanced natural language processing techniques to discover macroscopic trends in history and culture, including gender bias, geographical focus, technology, and politics, along with accurate dates for specific events.
Online publishing
The Stanford Encyclopedia of Philosophy (begun in 1995) is a dynamic reference work of terms, concepts, and people from philosophy maintained by scholars in the field. MLA Commons offers an open peer-review site (where anyone can comment) for its ongoing curated collection of teaching artifacts in Digital Pedagogy in the Humanities: Concepts, Models, and Experiments (2016). The Debates in the Digital Humanities platform contains volumes of the open-access book of the same title (2012 and 2016 editions) and allows readers to interact with material by marking sentences as interesting or adding terms to a crowdsourced index.
Wikimedia projects
Some research institutions work with the Wikimedia Foundation or volunteers of the community, for example, to make freely licensed media files available via Wikimedia Commons or to link or load data sets with Wikidata. Text analysis has been performed on the contribution history of articles on Wikipedia and its sister projects.
Criticism
In 2012, Matthew K. Gold identified a range of perceived criticisms of the field of digital humanities: "a lack of attention to issues of race, class, gender, and sexuality; a preference for research-driven projects over pedagogical ones; an absence of political commitment; an inadequate level of diversity among its practitioners; an inability to address texts under copyright; and an institutional concentration in well-funded research universities".
How do media changes create epistemic changes, and how can we look behind the 'screen essentialism' of computational interfaces? Here we might also reflect on the way in which the practice of making-visible also entails making-invisible – computation involves making choices about what is to be captured.
Negative publicity
Lauren F. Klein and Gold note that many appearances of the digital humanities in public media are often in a critical fashion. Armand Leroi, writing in The New York Times, discusses the contrast between the algorithmic analysis of themes in literary texts and the work of Harold Bloom, who qualitatively and phenomenologically analyzes the themes of literature over time. Leroi questions whether or not the digital humanities can provide a truly robust analysis of literature and social phenomena or offer a novel alternative perspective on them. The literary theorist Stanley Fish claims that the digital humanities pursue a revolutionary agenda and thereby undermine the conventional standards of "pre-eminence, authority and disciplinary power". However, digital humanities scholars note that "Digital Humanities is an extension of traditional knowledge skills and methods, not a replacement for them. Its distinctive contributions do not obliterate the insights of the past, but add and supplement the humanities' long-standing commitment to scholarly interpretation, informed research, structured argument, and dialogue within communities of practice". Some have hailed the digital humanities as a solution to the apparent problems within the humanities, namely a decline in funding, repetitive debates, and a fading set of theoretical claims and methodological arguments. Adam Kirsch, writing in the New Republic, calls this the "False Promise" of the digital humanities. While the rest of the humanities and many social science departments have seen a decline in funding or prestige, the digital humanities have seen increasing funding and prestige. Burdened with the problems of novelty, the digital humanities are discussed either as a revolutionary alternative to the humanities as usually conceived or as simply new wine in old bottles. Kirsch believes that digital humanities practitioners suffer from being marketers rather than scholars, attesting to the grand capacity of their research more than actually performing new analysis and, when they do, performing only trivial parlor tricks of research. This form of criticism has been repeated by others, such as Carl Straumsheim, writing in Inside Higher Education, who calls it a "Digital Humanities Bubble". Later in the same publication, Straumsheim alleges that the digital humanities are a 'Corporatist Restructuring' of the humanities. Some see the alliance of the digital humanities with business as a positive turn that causes the business world to pay more attention, thus bringing needed funding and attention to the humanities. If it were not burdened by the title of digital humanities, it could escape the allegations that it is elitist and unfairly funded.
Black box
There has also been critique of the use of digital humanities tools by scholars who do not fully understand what happens to the data they input, and who place too much trust in the "black box" of software that cannot be sufficiently examined for errors.
Johanna Drucker, a professor in the UCLA Department of Information Studies, has criticized the "epistemological fallacies" prevalent in popular visualization tools and technologies (such as Google's n-gram graph) used by digital humanities scholars and the general public, calling some network diagramming and topic modeling tools "just too crude for humanistic work." The lack of transparency in these programs obscures the subjective nature of the data and its processing, she argues, as these programs "generate standard diagrams based on conventional algorithms for screen display ... mak[ing] it very difficult for the semantics of the data processing to be made evident."
Diversity
There has also been some recent controversy among practitioners of digital humanities around the role that race and/or identity politics plays. Tara McPherson attributes some of the lack of racial diversity in digital humanities to the modality of UNIX and computers themselves. An open thread on DHpoco.org recently garnered well over 100 comments on the issue of race in digital humanities, with scholars arguing about the extent to which racial (and other) biases affect the tools and texts available for digital humanities research. McPherson posits that there needs to be an understanding and theorizing of the implications of digital technology and race, even when the subject for analysis appears not to be about race. Amy E. Earhart criticizes what has become the new digital humanities "canon" in the shift from websites using simple HTML to the usage of the TEI and visuals in textual recovery projects. Works that had previously been lost or excluded were afforded a new home on the internet, but much of the same marginalizing practices found in traditional humanities also took place digitally. According to Earhart, there is a "need to examine the canon that we, as digital humanists, are constructing, a canon that skews toward traditional texts and excludes crucial work by women, people of color, and the LGBTQ community."
Issues of access
Practitioners in digital humanities are also failing to meet the needs of users with disabilities. George H. Williams argues that universal design is imperative for practitioners to increase usability because "many of the otherwise most valuable digital resources are useless for people who are—for example—deaf or hard of hearing, as well as for people who are blind, have low vision, or have difficulty distinguishing particular colors." In order to provide accessibility and productive universal design successfully, it is important to understand why and how users with disabilities use digital resources, while remembering that all users approach their informational needs differently.
Cultural criticism
Digital humanities have been criticized not only for ignoring traditional questions of lineage and history in the humanities, but for lacking the fundamental cultural criticism that defines the humanities. However, it remains to be seen whether or not the humanities have to be tied to cultural criticism, per se, in order to be the humanities. The sciences might imagine the digital humanities as a welcome improvement over the non-quantitative methods of the humanities and social sciences.
Difficulty of evaluation
As the field matures, there has been a recognition that the standard model of academic peer review may not be adequate for digital humanities projects, which often involve website components, databases, and other non-print objects.
Evaluation of quality and impact thus requires a combination of old and new methods of peer review. One response has been the creation of the DHCommons Journal. This accepts non-traditional submissions, especially mid-stage digital projects, and provides an innovative model of peer review more suited to the multimedia, transdisciplinary, and milestone-driven nature of digital humanities projects. Other professional humanities organizations, such as the American Historical Association and the Modern Language Association, have developed guidelines for evaluating academic digital scholarship.
Lack of focus on pedagogy
The 2012 edition of Debates in the Digital Humanities recognized that pedagogy was the "neglected 'stepchild' of DH" and included an entire section on teaching the digital humanities. Part of the reason is that grants in the humanities are geared more toward research with quantifiable results than toward teaching innovations, which are harder to measure. In recognition of the need for more scholarship on teaching, the edited volume Digital Humanities Pedagogy was published, offering case studies and strategies to address how to teach digital humanities methods in various disciplines.
See also
Cyborg anthropology
Digital anthropology
References
External links
Debates in the Digital Humanities book series
Digital Humanities Quarterly
Intro to Digital Humanities by UCLA Center for Digital Humanities
CUNY Digital Humanities Resource Guide by CUNY Digital Humanities Initiative
DH Toychest: Guides and Introductions curated by DH scholar Alan Liu
How did they make that? by DH scholar Miriam Posner
8233065
https://en.wikipedia.org/wiki/2006%20Dalit%20protests%20in%20Maharashtra
2006 Dalit protests in Maharashtra
In November and December 2006, the desecration of an Ambedkar statue in Kanpur triggered violent protests by Dalits in Maharashtra, India.
Background
There was resentment among Dalits in Maharashtra due to the murder of four Dalits, allegedly by a mob of Kunbis, in Khairlanji village in September 2006. On 28 November 2006, the brewing resentment in the Dalit community in Maharashtra took the form of violent protests, when a statue of Dalit icon B. R. Ambedkar was desecrated by a vandal in Kanpur. Several people remarked that the protests were fueled by the Khairlanji killings, including the Maharashtra Chief Minister Vilasrao Deshmukh and the Mumbai Police Commissioner A. N. Roy. According to The Hindu, the political parties had not responded appropriately to the outrage over the Khairlanji killings, resulting in heightened tensions. Later, the Maharashtra Navnirman Sena (MNS) chief Raj Thackeray claimed that the protests were stoked by certain political parties in their bid to oust Maharashtra Home Minister R. R. Patil.
Protests
On 30 November 2006, violent protests took place in several places in Maharashtra. The Dalit protestors set three trains on fire, damaged over 100 buses and clashed with police.
North Maharashtra
In Osmanabad, two persons were killed in police firing on a protesting mob. Two more deaths were reported, one each in Nanded and Nashik, during the violent protests. Subsequently, a curfew was imposed in the Osmanabad, Nanded and Nandurbar towns of Maharashtra. In Aurangabad, a crowd of 1,000 Dalits gathered to protest against the desecration. Some of the protestors started pelting stones at passing vehicles, injuring six persons, including a sub-inspector and a constable. The police resorted to firing in the air to disperse the crowd. In Akola, a truck was set on fire on the national highway, and there was heavy stone-pelting on State Transport buses. Heavy deployment of police forces took place in the affected areas. Around 1,500 people were put under preventive arrest, and three persons were detained in connection with the lynching of a youth in Nashik. In Akola, the police arrested 14 persons for burning an effigy of Chief Minister Vilasrao Deshmukh.
Pune District
In the Pune and Pimpri-Chinchwad areas, 60 vehicles were damaged and set ablaze by agitators and 13 policemen were injured. A curfew was imposed in Pimpri on 30 November. On 1 December, a municipal corporation bus was stoned at Bopodi chowk in Pimpri-Chinchwad.
Mumbai and its neighborhoods
On 30 November, a mob of over 6,000 protestors stopped the Deccan Queen passenger train near Ulhasnagar, asked the passengers to alight, and set five of its bogies afire. One compartment of a local train was set ablaze at Matunga in Mumbai. There were no injuries. Some compartments of a commuter train were also torched at Ulhasnagar, and the police fired in the air to control the violent crowds. The mob in Ulhasnagar also vandalised the railway station. Suburban train services were affected in parts of Mumbai as protestors squatted on the tracks. Shops and establishments in the city were also closed in view of the protests. Incidents of protestors setting up road blocks and pelting stones were reported in Mumbai suburbs like Kanjurmarg, Mulund, Bhandup, Trombay, Kurla, Kalina, Chembur, Kurar in Malad, Goregaon, Pali Hill in Bandra, and Worli. The police reported that gangster Chhota Rajan's brother Deepak Nikhalje was responsible for the violence in Chembur.
Police used a lathi charge and fired in the air at Kherwadi junction on the Western Express Highway in Vakola, after an angry mob blocked traffic and indulged in stone pelting. In Thane, corporation-run buses were off the road due to stone pelting. A Municipal Transport Corporation bus going from Kalyan to Dombivli was set on fire at Manpada by a violent mob. Protestors also forced owners of shops and establishments to down shutters. Over 100 buses and 35 private vehicles were damaged in stone pelting. The Mumbai Police Commissioner A. N. Roy put the loss at around Rs 30 lakh. BEST said 91 of its buses were damaged and four drivers and a woman passenger injured in stone pelting. At least 13 policemen, including Additional Commissioner of Police K. L. Bishnoi, were injured in the protests. 176 people were arrested in Mumbai. The Thane police arrested 19 persons and detained another 29.
Outside Maharashtra
The protests also spread to some parts of Maharashtra's neighboring states, Gujarat and Karnataka. In Surat, Gujarat, a mob pelted stones and damaged vehicles. Eight persons were arrested in connection with the violence. In Hubli, Karnataka, activists belonging to various Dalit organizations stoned a dozen city buses.
Arrest of the vandal
Many Dalit leaders, including the UP RPI vice-president S. R. Darapuri, remarked that the desecration of the Ambedkar statue displayed the deep-seated animosity towards Dalits in India. Janata Party president Subramanian Swamy claimed that the desecration was the work of "anti-national" elements. Later, the Kanpur Police arrested a Dalit youth, Arun Kumar Balmiki, for desecrating the Ambedkar statue. According to the police, the youth had "admitted to having damaged the statue in a drunken state along with two friends". Earlier, in a similar case, a Dalit youth was held for desecrating an Ambedkar statue in Gulbarga, Karnataka. However, some Dalits in Kanpur alleged that the youth was falsely implicated to protect the real culprits. Some Dalits protesting against Balmiki's arrest damaged vehicles and blocked traffic in Kanpur. Meanwhile, the old desecrated statue in Kanpur was buried with full honors and quickly replaced with a new one.
Political fallout
The Maharashtra Chief Minister Vilasrao Deshmukh requested Dalit leaders to maintain calm. The Police Commissioner of Mumbai, A. N. Roy, requested the state government to declare a holiday on 6 December (Dr. Ambedkar's death anniversary), but the Government decided against doing so. Deshmukh also called an all-party meeting. The Congress president Sonia Gandhi also pitched in to settle the issue. Raj Thackeray accused the "anti-R. R. Patil forces" of fueling the riots. He also drew attention to another incident in Khairlanji, in which a Dalit man allegedly raped a girl and killed her. Thackeray demanded action against those responsible for the rape and the subsequent death of the girl, and also remarked that nobody had helped the girl's family. In Kanpur, a Congress delegation led by former bureaucrat P. L. Punia sat on a dharna (strike) when the District Magistrate and the Senior Superintendent of Police prevented them from visiting the site of the desecration. RPI president Ramdas Athawale also reached Kanpur to visit the site of the desecration. Earlier, he had said that he would hold protests in Kanpur against the "heinous" act. However, he alleged that he had been put under house arrest after the police confined him under tight security in a local circuit house in Kanpur.
At the 22nd National Conference of Dalit Writers in New Delhi, the former Governor of Arunachal Pradesh, Mata Prasad, declared that the agitation would continue through Dalit literature.
References
See also
Vandalism of Ambedkar statues
Dalit protests in Maharashtra
Maharashtra Maharashtra Protests in India Dalit politics Dalit history History of Maharashtra (1947–present) Caste-related violence in India Dalit protests in Maharashtra November 2006 events in Asia December 2006 events in Asia
7094413
https://en.wikipedia.org/wiki/Sharp%20PC-E500S
Sharp PC-E500S
The Sharp PC-E500S was a 1995 pocket computer by Sharp Corporation and the successor to the 1989 PC-E500 model, featuring a 2.304 MHz CMOS CPU.
Description
It was slightly wider, and the keys were slightly larger, than on the previous model. The display had more contrast, and the keyboard cover was a (removable) hinged lid (clamshell) instead of a plastic slipcase. There were also four additional BASIC commands. Programs written on the PC-E500 were executable on the PC-E500S. It came with 32 KB of RAM, which could be upgraded to 96 KB using memory expansion cards. The monochrome LCD had 240×32 pixels, which could display four lines of 40 characters each as well as graphics. The 256 KB system ROM contained the BIOS, a diagnostic suite, and the BASIC interpreter used to program the device. An algebraic calculation system was included, with Algebraic Expression Reserve (AER) memory: frequently used formulas or constants could be stored in memory and recalled for repeated use. The PC-E500 series also performed as a scientific calculator when switched into 'CAL' mode. It also included an X<>Y exchange key for working with complex numbers and polar-to-rectangular conversions.
Applications
Mathematics (integers, equations, differential & integral calculus, formulas and graphs)
Physics
Earth sciences
Meteorology
Chemistry
Biology
Geology
Electrical engineering
Mechanical engineering
In addition, things like amino acids and the periodic table of elements were available. These built-in programs were accessed through a menu system and special function keys. There was also a built-in menu editor to add new software to the menus or indeed replace some built-in software or formulas.
Operating modes
BASIC (programming and execution)
CAL (scientific calculator)
MATRIX (matrix calculations)
STAT (statistics)
ENG (engineering)
Accuracy
10 digits (mantissa) + 2 digits (exponent) in single-precision mode.
20 digits (mantissa) + 2 digits (exponent) in double-precision mode.
In the CAL, MATRIX and STAT modes, only single-precision mode can be used.
Memory expansion
The Sharp PC-E500 series could store data and programs on memory expansion cards as well as in the main RAM. Six cards were available:
CE-210M: 2 KB
CE-211M: 4 KB
CE-212M: 8 KB
CE-2H16M: 16 KB
CE-2H32M: 32 KB
CE-2H64M: 64 KB
These cards used a CR1616 lithium battery for memory backup. The memory configuration was software-switchable from the command line. The RAM card could be appended to the system memory, replace the system memory, or act as a separate space to be used as a RAM drive (F:). The main memory could also be partitioned off to a RAM drive (E:).
Peripherals
Thermal printer & cassette interface.
CE-140F: 2.5-inch pocket floppy drive.
CE-130T: RS-232 adaptor level converter.
CE-135T: RS-232 adaptor level converter (Macintosh).
CE-515: 4-color X/Y plotter printer.
The PC-E500S weighed 340 g (with batteries) and was powered by four AAA batteries. Given its power consumption of 0.09 W, it could be used for about 70 hours on one set of batteries.
Variants
PC-E500 (English): 32 KB, engineer software, double precision, slipcase, rubber keys, black, 1988/1989
PC-E500 (Japanese): 32 KB, engineer software, double precision, Kanji, slipcase, rubber keys, black, 1988
PC-E500PJ / PC-E500-BL (Japanese): 32 KB, engineer software, game "HEAVY METAL mini" (by CRISIS Software) preloaded into RAM, double precision, Kanji, slipcase, rubber keys, blue, 1990, limited special edition by Pokecom Journal (PJ)
PC-E500S (English): 32 KB, engineer software, double precision, high-contrast display, clamshell, plastic keys, black, 1995
PC-E550 (Japanese): 64 KB, engineer software, double precision, Kanji, slipcase, rubber keys, white, 1990
PC-E650 (Japanese): 64 KB, engineer software, double precision, structured BASIC, Kanji, clamshell, plastic keys, black, 1993
PC-1480U (Japanese): 32 KB, no engineer software, "coop uni" label, double precision, Kanji, slipcase, rubber keys, black, 1988
PC-1490U (Japanese): 32 KB, no engineer software, "coop uni" label, double precision, Kanji, slipcase, rubber keys, black, 1990
PC-1490UII (Japanese): 64 KB, no engineer software, "UNIV. TOOL" label, double precision, Kanji, slipcase, rubber keys?, black, 1991
PC-U6000 (Japanese): 64 KB, no engineer software, "UNIV. TOOL" label, double precision, Kanji, clamshell, plastic keys, black, 1993
See also
Sharp pocket computer character sets
References
PC-E500S PC-E500S
401744
https://en.wikipedia.org/wiki/Plessey%20System%20250
Plessey System 250
Plessey System 250, also known as PP250, was the first operational computer to implement capability-based addressing, checking and balancing the computation as a pure Church–Turing machine.
Description
A Church–Turing machine is a digital computer that encapsulates the symbols in a thread of computation as a chain of protected abstractions by enforcing the dynamic binding laws of Alonzo Church's lambda calculus. Other capability-based computers, which include the CHERI and CAP computers, are hybrids. They retain default instructions that can access every word of accessible physical or logical (paged) memory. This is an unavoidable characteristic of the von Neumann architecture, which is founded on shared random-access memory and trust in shared default access rights. For example, every word in every page managed by the virtual memory manager in an operating system using a memory management unit (MMU) must be trusted. Using a default privilege among many compiled programs allows corruption to grow without any method of error detection: because the range of virtual addresses given to the MMU, and the range of physical addresses produced by it, are shared, undetected corruption flows across the shared memory space from one software function to another. PP250 removed not only virtual memory and any centralized, precompiled operating system, but also the superuser, removing all default machine privileges. It is default privileges that empower undetected malware and hacking in a computer. Instead, the pure object-capability model of PP250 always requires a limited capability key to define the authority to operate. PP250 separated binary data from capability data to protect access rights, simplify the computer, and speed garbage collection. The Church machine encapsulates and context-limits the Turing machine by enforcing the laws of the lambda calculus. The typed digital media is program-controlled by distinctly different machine instructions. Mutable binary data is programmed by a 28-instruction RISC set for imperative and procedural programming of the binary data, using binary data registers confined to a capability-limited memory segment. The immutable capability keys, exclusive to six Church instructions, navigate the computational context of a Turing machine through the separately programmed structure of the object-capability model. Immutable capability keys represent named lambda calculus variables. This Church side is a lambda calculus meta-machine. The other side is an object-oriented machine of binary objects: programmed functions, capability lists defining function abstractions, storage for threads of computation (lambda calculus applications), or storage for the list of capability keys in a namespace. The laws of the lambda calculus are implemented by the Church instructions with micro-programmed access to the reserved (hidden) capability registers. The software is incrementally assembled as object-oriented machine code linked by the capability keys. The structure of function abstractions, including those for memory management, input and output, scheduling, and communication services, is protected as private frames in a thread. Threads compute inline or as parallel computations activated by program-controlled Church instructions. Conceptually, the PP250 operates at the dead center of the Church–Turing thesis as a digitally secure, functional Church–Turing machine for trusted software.
As a real-time controller, PP250 provided fail-safe software applications for computerized telephone and military communication systems, with decades of software and hardware reliability. Capability-limited addressing detects and recovers from errors on contact, without any harmful corruption or information theft. Furthermore, no unfair default privileges exist for an operating system or a superuser, thereby blocking all hacking and malware. The multiprocessing hardware architecture and the dynamically bound, type-limited memory, exclusively accessed through capability-limited addressing, replace statically bound, page-based linear compilations with dynamically bound instructions, crosschecked and authorized at run time. By checking every memory reference as an offset within the base, limit, and access types specified, bugs, errors, and attacks are detected by the type-limited capability register (a sketch of this check appears below). The imperative Turing commands must bind to binary data objects as defined by the selected capability register. The access rights of the selected capability register must approve the data access rights (Read Binary Data, Write Binary Data or Execute Machine Code). On the other hand, functional Church instructions are bound dynamically to a capability key in a capability list held in a capability register with capability access rights (Load Capability Key, Save Capability Key or Enter Capability List). In this way, object-oriented machine code is encapsulated as a function abstraction in private execution space. This makes PP250 unlike the stretched von Neumann architecture. Instead, a lambda calculus meta-machine scales a 'single tape' Turing machine through a DNA network of 'Enter' capability keys representing functional nodes in a lambda calculus namespace. It is a register-oriented architecture, with 8 program-accessible data registers and 8 program-accessible capability registers. Data registers are 24-bit; capability registers are 48-bit and contain the base address of the segment to which the capability refers, the size of the segment, and the access rights granted by the capability. Capabilities in memory are 24-bit and contain the access rights and an index into the System Capability Table for the segment to which the capability refers; entries in that table contain the segment base address and length for the segment to which the entry refers. Instructions that access memory have an opcode, a field specifying a data register operand, a field specifying a data register used as an index register containing an offset into a segment, a field specifying a capability register referring to the segment containing the memory location, and a field containing a base offset into the segment. The offset into the segment is the sum of the base offset and the contents of the index register. The software was modular, based on the universal model of computation and the lambda calculus. Six Church instructions hide the details of a named function application, using capability keys for the typed concepts of variables, functions, abstractions, applications, and a namespace. Instead of binding instructions to static linear memory as a default shared privilege exploited by malware and hackers, instructions are bound to typed and protected private digital objects, using capability keys in a capability-based security system of immutable mathematical symbols. The resulting object-oriented machine code achieved many decades of trusted software reliability as mathematically pure, industrial-strength computer science.
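The bounds-and-rights check described above can be sketched in C. This is an illustration under stated assumptions, not Plessey source code: the fields follow the description (a loaded capability register carrying a segment base, size, and access rights), while the names, types, and fault convention are invented.

#include <stdint.h>
#include <stdio.h>

enum rights {          /* access-right bits carried by a capability        */
    R_READ  = 1 << 0,  /* Read Binary Data                                 */
    R_WRITE = 1 << 1,  /* Write Binary Data                                */
    R_EXEC  = 1 << 2,  /* Execute Machine Code                             */
};

struct cap_reg {       /* a loaded (48-bit) capability register            */
    uint32_t base;     /* base address of the segment                      */
    uint32_t limit;    /* size of the segment                              */
    uint8_t  rights;   /* access rights granted by the capability          */
};

/* Returns the checked absolute address, or -1 on a capability fault.
 * offset = instruction's base offset + contents of the index register. */
static int64_t cap_access(const struct cap_reg *c, uint32_t offset,
                          uint8_t need)
{
    if ((c->rights & need) != need) return -1; /* wrong access type        */
    if (offset >= c->limit)         return -1; /* outside the segment      */
    return (int64_t)c->base + offset;
}

int main(void) {
    struct cap_reg c = { .base = 0x4000, .limit = 16, .rights = R_READ };
    printf("%lld\n", (long long)cap_access(&c, 3,  R_READ));  /* 16387 (0x4003) */
    printf("%lld\n", (long long)cap_access(&c, 3,  R_WRITE)); /* -1: no right   */
    printf("%lld\n", (long long)cap_access(&c, 99, R_READ));  /* -1: past limit */
    return 0;
}

The point of the design is that this check is not optional: no instruction can form an address except through a capability register, so an out-of-range offset or a missing right faults on contact instead of silently corrupting a neighboring segment.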
History
Manufactured by the Plessey Company plc in the United Kingdom in 1970, it was successfully deployed by the Ministry of Defence for the British Army's Ptarmigan project and served in the first Gulf War as a tactical mobile communication network switch. The PP250 was sold commercially circa 1972.
See also
Army Communications and Information Systems (United Kingdom)
Flex machine
References
External links
Photograph of the 250 Multiprocessor System (1975)
Book on PP250 results
British Army equipment Computers designed in the United Kingdom Capability systems History of computing in the United Kingdom Military computers
3260367
https://en.wikipedia.org/wiki/Bare%20Bones%20Software
Bare Bones Software
Bare Bones Software is a privately held software company in North Chelmsford, Massachusetts, United States, that develops software tools for the Apple Macintosh platform. The company developed the BBEdit text editor, marketed under the registered trademark "It doesn't suck", and has been mentioned as a "top-tier Mac developer" by Mac OS X journalist John Siracusa. The company was founded in May 1993 and incorporated under the Commonwealth of Massachusetts in June 1994.
Product list
BBEdit: professional HTML and text editor.
Yojimbo: information organizer.
Discontinued:
BBEdit Lite: free "lightweight" text editor (replaced by TextWrangler).
Mailsmith: email client (ownership transferred to Stickshift Software; became freeware).
Super Get Info: file and folder info utility for Mac OS X (discontinued).
TextWrangler: free, lightweight text editor which replaced BBEdit Lite (replaced by BBEdit).
WeatherCal: application that adds weather forecasts to iCal (discontinued on July 31, 2011).
References
External links
2006 E-mail interview with Bare Bones founder Rich Siegel
Macintosh software companies Software companies based in Massachusetts Software companies of the United States
60994722
https://en.wikipedia.org/wiki/Police%20surveillance%20in%20New%20York%20City
Police surveillance in New York City
The New York City Police Department (NYPD) actively monitors public activity in New York City, New York, United States. Historically, surveillance has been used by the NYPD for a range of purposes, including fighting crime and counter-terrorism, but also for nefarious or controversial purposes such as monitoring political demonstrations, activities, and protests, and even entire ethnic and religious groups. Following the September 11 attacks, the NYPD developed a large and sophisticated array of surveillance technologies, generating large collections of data including billions of license plate readings and weeks of security footage from thousands of cameras. This data is now used in the department's day-to-day operations, from counter-terrorism investigations to resolving domestic violence complaints. The system consists of an interconnected web of CCTV cameras, license plate readers, physical sensors, machine learning software, data analytics dashboards, and mobile apps. Now centered around the Microsoft-built Domain Awareness System, the NYPD surveillance infrastructure has cost hundreds of millions of US dollars to produce and maintain. Many of its core components are being sold to or emulated by policing departments across the world. The NYPD has credited surveillance systems with preventing numerous terrorist attacks on the city and helping to provide evidence for hundreds of criminal cases. Unlike intelligence agencies such as the CIA, the NYPD does not disclose the budget or funding sources for its surveillance system. However, this may change in December 2020, when the POST Act goes into effect.
History
The NYPD has been involved in numerous surveillance activities of NYC throughout its history.
Black Panthers and the Handschu Agreement
Between the 1950s and 1970s, the NYPD conducted extensive surveillance on the Black Panther Party, the Nation of Islam, the Young Lords, suspected communists, and other individuals of interest. The surveillance was conducted by the then anti-communist division known as "The Red Squad", which has since become the more general-purpose Intelligence Bureau. The Red Squad surveilled targets and recorded information about their day-to-day behaviors on index cards, which were then stored at police headquarters. These routine behaviors included granular detail such as which table individuals were seated at during a fundraiser dinner, and how much they paid for the meal. In 2016, 520 boxes of notecards were discovered in a Queens warehouse. The city records department has yet to release these boxes to the public but has stated it is working on rules to allow access. Since the 1980s, the surveillance records from the Red Squad have been required to be open to the public, allowing individuals to request copies of files written on them. However, many requests for such files have been rejected, and the files themselves shuffled such that indexing of the documents is not possible. Civil rights attorneys and critics of the NYPD such as Jethro Eisenstein have argued that this difficulty in obtaining documents is the result of a concerted effort by the department to restrict public access to policing data. Prior to the discovery of the notecards, history professor Johanna Fernandez sued to have surveillance information on the Young Lords released and was told by the NYPD that no such documents existed, and that officials had searched for over 100 hours for any pertinent files without success.
In 1969, twenty-one members of the Black Panthers were indicted on charges of planning bombings of police and civilian targets. The police were found to have been conducting comprehensive surveillance of the group, and after only ninety minutes of deliberation all defendants were acquitted. The Handschu agreement was made following a class action lawsuit on behalf of the defendants, and it regulates the capacity for police surveillance. In 1971, a lawsuit was opened against the NYPD arguing that surveillance data such as that collected by the Red Squad violated the Handschu agreement and the constitution; the lawsuit remains open to this day, and police officials claim it will last forever.
CompStat and the Dropping Crime Rate
In the early 1990s, then deputy police commissioner Jack Maple designed and implemented the CompStat crime statistics system. According to an interview Jack Maple gave to Chris Mitchell, the system was designed to bring greater equity to policing in the city by attending to crimes which affected people of all socioeconomic backgrounds, including previously ignored poor New Yorkers. Also at this time, the Dirty Thirty police scandal was unfolding in Harlem, which involved illegal searches and seizures of suspected and known drug dealers and their homes.
Methods
Domain Awareness System and cameras
The NYPD uses the Domain Awareness System (DAS) to perform surveillance across New York City, the system being composed of numerous physical and software components, including over 18,000 CCTV cameras around New York City. This surveillance is conducted on all people within range of the sensors, including people walking past security cameras or driving past license plate readers; it is not restricted to suspected criminals or terrorists. Additionally, individuals may be tracked across the country through nationwide sensor systems. Data collected from the DAS is retained for varying amounts of time depending on the type of data, with some records being retained for a minimum of five years. In 2014, speed cameras were introduced into 20 school zones within the five boroughs. Since July 2019, 750 school zones have speed cameras operating between 6 A.M. and 10 P.M. The images in the DAS are used for facial recognition. The NYPD's privacy statement for the Domain Awareness System, however, specifies that it does not use facial recognition.
Surveillance of ethnic and religious groups
In 2001, the NYPD established a secret program (known as the Demographics Unit, and later the Zone Assessment Unit) to surveil American Muslim daily life throughout New York City and the nation. The Demographics Unit was tasked with monitoring 28 "ancestries of interest, ranging from Arab ethnicities like Palestinian and Syrian to heavily Muslim populations from former Soviet states such as Chechnya and Uzbekistan to Black American Muslims". In August 2012, the Chief of the NYPD Intelligence Division, Lt. Paul Galati, admitted during sworn testimony that in the six years of his tenure, the unit tasked with monitoring American Muslim life had not yielded a single criminal lead. The Moroccan Initiative was a similar ethnicity-based surveillance program of Moroccan Americans following the 2004 Madrid train bombings. The Intelligence Division units that were engaged in the NYPD's Muslim surveillance program include its Demographics Unit, renamed the Zone Assessment Unit (now disbanded); the Intelligence Analysis Unit; the Cyber Intelligence Unit; and the Terrorist Interdiction Unit.
In 2017, The Daily Beast reported that the NYPD continues to perform surveillance on the Muslim community in NYC. The New York Times reported in 2013 that CIA officers were embedded in the NYPD in the decade after the September 11 attacks, and that those officers were involved in nationwide surveillance operations.
Undercover investigations of activists and protestors
In 2003, the NYPD requested the elimination of all rules related to the Handschu agreement; Judge Charles S. Haight ruled that relaxed rules, comparable to the similarly relaxed regulations adopted by the FBI in 2002, would be reasonable. He amended his ruling to add these relaxed rules into a court order, stating that the "NYPD is in need of discipline", following the interrogation of anti-war demonstrators by the NYPD in which the activists were required to fill out a "Demonstration Debriefing Form" that asked about individual ideologies, such as their opinions on Israel and their political affiliation, which was then entered into a database. Following this judgment, police no longer required "specific information about criminal activity" to surveil a political or religious gathering, and instead only needed to show "a law enforcement purpose". Videotapes recorded between 2004 and 2005 revealed NYPD officers posing as activists to surveil at least seven Iraq War protest events, including wearing political pins and participating in mock arrests by riot police. Prior to the 2004 Republican National Convention, New York police broadly spied on people planning protests during the convention, and sent undercover officers nationwide to infiltrate various grass-roots political groups. Officials with the city and the Police Department defended their tactics, saying they needed to ferret out potential terrorists and protesters who intended to act violently or to commit vandalism. In January 2019, NYPD e-mail messages were released which indicated surveillance of activists within the Black Lives Matter protests. These e-mails included references to the activists as "idiots" and "ninjas". Following an internal investigation of compliance with the Handschu agreement in 2016, the NYPD inspector general found that in the cases reviewed, both NYPD investigations and their use of informants and undercover officers continued after approval had expired.
Stop and Frisk
The tactic of "stop and frisk" increased dramatically in 2008 to over half a million stops, 88% of which did not result in any fine or conviction, peaking in 2011 at 685,724 stops, again with 88% resulting in no conviction. On average, from 2002 to 2013, the share of individuals stopped without any conviction was 87.6%. Stop and frisk became the subject of a racial profiling controversy, in part because of the large majority of people of color who were stopped: in 2011, 91% of stops were performed on Black and Latino New Yorkers. Mugshots from stop-and-frisk arrests are used by the NYPD's facial recognition system, and the personal data of those arrestees is used to increase charges if they are arrested again, even when no charges were filed based on the initial stop.
Cellphone tracking
The NYPD has employed a number of cellphone surveillance strategies, including Stingray phone trackers and cell tower dumps. In 2014, the International Business Times reported that the NYPD conducted cellphone tower dumps in response to unknown individuals raising a white flag on top of the Brooklyn Bridge. Tower dumps give lists of all phone numbers which were near a cell tower at a given time.
According to the NYCLU, the NYPD "disclosed the use of Stingrays more than 1,000 times between 2008 and May of 2015 without a written policy and following a practice of obtaining only lower-level court orders rather than warrants", without revealing "what models of Stingrays [are used] or how much taxpayer money is used to purchase and maintain them." In January 2019, a judge ordered that the NYPD must officially disclose whether a Stingray phone tracker was used by the NYPD to monitor activists' cellphone communications.
Gang databases
The NYPD operates a computer database of suspected gang members called the "criminal group database", which in February 2019 contained over 42,000 individuals before shrinking to 18,000 in June 2019; membership now fluctuates as thousands of names are added and removed. 1.1% of the individuals in the database are white, 66% are Black, and 31.7% are Latino. The Legal Aid Society created a website for individuals to file a FOIL request to see if they are in the database; the NYPD has rejected all FOIL requests on this subject. Being in this database affects treatment within the court system, including the severity of charges leveled against a defendant. Many criteria can lead to an individual being identified as a gang member, including admitting to being in a gang; being identified by two reliable sources within a gang; being seen in a "known gang location" wearing either black, gold, yellow, red, purple, green, blue, white, brown, khaki, gray, orange, or lime green; or doing other unspecified activities identified by the NYPD as being gang-affiliated. The NYPD also monitors social media activity, and may add individuals to the database if they engage with social media posts connected to people who are already in the gang database.
Facial recognition
In 2012, then-Mayor Michael Bloomberg claimed facial recognition "is something that's very close to being developed". Since 2011, the NYPD has had a dedicated unit of officers, known as the Facial Identification Section (FIS), using facial recognition technology to compare images taken from the DAS with photos like mug shots, juvenile arrest photos, and pistol permits. In the first five and a half years of using the software, the NYPD arrested 2,878 individuals based on facial recognition matches. Police officials use Photoshop tools such as blur and clone stamp, copy and paste different individuals' lips and eyes onto photographs, merge multiple faces together, and use 3D modeling software to generate images of faces partially covered or turned away from the camera, in order to find potentially matching images. When the system fails to generate a response, officials have been recorded passing celebrity look-alikes through the database, such as in 2017, when a photograph of Woody Harrelson was used to generate image matches for a person stealing beer, or when a photograph of an unknown New York Knicks player (name redacted by the NYPD) was used in pursuit of a man wanted for assault; both instances were revealed after a two-year court case between the NYPD and the Georgetown University Law Center. Edited images cannot be processed accurately by facial recognition software, as the software cannot distinguish between original and modified features, which could lead a detective to falsely assess a match as likely when the system incorrectly returns a high confidence score. The NYPD has attempted to use police sketches in facial recognition software in the past, but concluded they did not work.
In 2015, the juvenile database was integrated into the same system, and facial recognition has been used to compare images from crime scenes with its collection of juvenile mug shots. The NYPD has been documented keeping photos in the juvenile database for several years, which leads to security footage being compared against potentially outdated images. As of August 2019, photos of 5,500 individuals used for comparison by the system were retained despite 4,100 of them no longer being considered juveniles. Depending on the severity of a felony charge, the NYPD is allowed to take arrest photos of minors as young as 11. The NAACP Legal Defense and Educational Fund released a statement in August 2019 stating that "It is well-documented that facial recognition technology routinely misidentifies darker skin, women, and young children. [...] This flawed technology puts the children of [Black, Latinx and Muslim people] and other targeted communities at grave risk. Deploying these experimental and biased practices to target children, especially without public or parental knowledge, is wholly unacceptable." NYPD officials report that they "compare images from crime scenes to arrest photos in law enforcement records" and that they "do not engage in mass or random collection of facial records from NYPD camera systems, the internet, or social media." In an interview with The Verge, one NYPD detective stated: "No one has ever been arrested on the basis of a facial recognition match alone. As with any lead, further investigation is always needed to develop probable cause to arrest. The NYPD has been deliberate and responsible in its use of facial recognition technology." A Georgetown law report disagrees with this NYPD assessment, citing specific instances such as a case where an officer arrested a suspect after text messaging a witness "Is this the guy…?" with an attached algorithmically matched photograph. Another arrest was made after a lineup, where the arrested individual was placed into the lineup purely because he had been selected by a facial recognition algorithm. In 2019, the MTA began testing the use of facial recognition.
X-ray surveillance
The NYPD has declined to describe how it employs its counter-terrorism X-ray vans, stating that any information about past uses could give terrorists adequate knowledge to foil future missions, but probing by an investigative journalist from ProPublica has revealed potential public health concerns relating to the backscatter X-ray technology. After a three-year legal battle with ProPublica and a brief from the NYCLU, the NYPD is now required to share any documents detailing the health and safety risks associated with deploying the vans around the city. The NYCLU speculates that the vans cost between $729,000 and $825,000 each.
Genetic sampling
The NYPD maintains a database of genetic information, collected from individuals who have been convicted of a crime, arrested, or questioned by police. Genetic material for the database can be collected from objects touched by individuals being questioned, such as cigarettes and cups. The database has 82,473 genetic profiles, having grown 29% between 2017 and 2019. The Legal Aid Society claims that 31,400 of those profiles came from individuals not convicted of a crime. In 2016, the NYPD collected hundreds of genetic samples from individuals in the region around Howard Beach, Queens, in an attempt to solve the murder of Karina Vetrano.
Social media monitoring
The NYPD follows social media accounts and stores posts of interest offline, retaining them for years after the initial collection. In one case, the department collected suspected gang members' Facebook posts for four years before making an arrest.
Drones
Between April and June 2014, the NYPD purchased 14 drones, at a cost of $480,000, from the Chinese company DJI, equipped with 4K-resolution cameras. Of these drones, two were large ruggedized 'M210 RTK' quadcopters with thermal and 3D imaging technology, eleven were smaller 'Mavic Pro' quadcopters measuring less than two feet in diameter, and one was a 'DJI Inspire' quadcopter intended for training purposes. The drones are operated by two officers from the NYPD's Technical Assistance Response Unit, one who monitors the drone and another who controls it. Policies on the use of drone technology include that it will not use facial recognition, and will only be deployed to monitor pedestrian and vehicle congestion and for security observation at shootings and large-scale events. Additionally, the chief of police can approve drone use for public safety or emergency situations. Critics such as the NYCLU argue that the generality of the term 'large-scale events' allows drones to be used in a broad range of applications, including the monitoring of protestors and activists. They also argue that although the drones themselves cannot perform facial recognition as per the policy, the footage collected from drones can still be used for such purposes. As with other video collected by the NYPD, drone footage is retained for a minimum of 30 days, up to an indefinite period if needed for unspecified legal purposes. Police officials state that the drones will not be used for unlawful surveillance and will not be equipped with weapons. Further, they state that it would be negligent not to use drone technology.
Governance and policy
NYPD self-governance
In April 2009, the NYPD released privacy guidelines for the Domain Awareness System's usage of data, specifying regulations such as data collection being utilized only in public areas and the DAS not using facial recognition. In 2012, Olivia J. Greer provided criticism of the NYPD privacy guidelines in the Michigan Telecommunications and Technology Law Review, stating that "the NYPD and the Guidelines make it clear that 'flexibility' is required to protect the public, and are very unclear with regard to how much and what type of flexibility is required. [...] Of overarching concern is the fact that the Guidelines allow data to be used for indeterminate 'legitimate law enforcement and public safety purposes' beyond the scope of the Statement of Purpose. This effectively means that data may be used for any purpose identified by the NYPD to relate to law enforcement, but not encompassed in the Guidelines. Perhaps the most troubling aspect of the Guidelines appears in the final paragraph: a disclaimer that states '[n]othing in these Guidelines is intended to create any private rights, privileges, benefits or causes of action in law or equity.' Thus, the Guidelines are not legally enforceable."
City governance
Local Law 49
In 2017, Local Law 49 was introduced to the New York City Council, which prompted the "creation of a task force that provides recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address instances where people are harmed by agency automated decision systems".
Due to the broadness of this bill, it would include NYPD surveillance algorithms within its scope. However, as of June 2019, it was reported that the task force had struggled to obtain the necessary information from government agencies in order to generate its recommendations.
POST Act
The POST (Public Oversight of Surveillance Technology) Act is a bill passed by the New York City Council on June 18, 2020, intended to create "comprehensive reporting and oversight of NYPD surveillance technologies". Although initially opposed by Mayor Bill de Blasio, in the wake of the 2020 George Floyd protests he signed off on the bill as part of a series of police reforms, including the displaying of badge numbers and restrictions on the use of chokeholds. The POST Act, though enforcing the release of information regarding surveillance technology and personal data, does not restrict the usage or adoption of the technology itself. Originally proposed to the New York City Council in February 2018 by Daniel Garodnick and co-authored by sixteen other council members, the act was opposed by the NYPD because it "cannot support a law that seems to be designed to help criminals and terrorists thwart efforts to stop them and endanger brave officers". The POST Act will go into effect 180 days (six months) after passage, in December 2020. During initial discussions of the bill in 2017, Deputy Commissioner of Counterterrorism and Intelligence John Miller described the POST Act as "insane", stating that "The way the bill is written right now it would be asking us to describe the specific manufacturer and model of an undercover recording device that an officer is wearing in an ongoing terror investigation". Chief of Detectives Robert K. Boyce also spoke against the bill, stating that it would give detailed information on the sex offender registry and the domestic violence report system, which would then allow individuals to exploit or evade those systems. The New York City Council's Black, Latino and Asian (BLA) Caucus endorsed the POST Act in 2018, stating that the bill "disclose[s] basic information about its surveillance and the safeguards in place to protect the privacy and civil liberties of New Yorkers" and that "BLAC's endorsement highlights the threat that unchecked surveillance particularly poses to communities of color and Muslim and Immigrant New Yorkers." The Brennan Center for Justice gave support for the bill, stating "None of the information required by the POST Act is granular enough to be of value to a potential terrorist or criminal [and] will not make surveillance tools any less effective." In its testimony to the New York City Council Committee on Public Safety, the Brennan Center officially supported the POST Act, saying the goal of the act is to "have an informed conversation with policymakers and community stakeholders [...] before the NYPD deploys a new technology and before there is another alarming headline about police surveillance [and instead to have] up-front, constructive community input. It also encourages the NYPD to be thoughtful in how it approaches new surveillance technologies, so as not to engage in activities that harm individual rights, undermine its relationships with communities, or waste scarce resources".
Comptroller Audit of DAS
In 2013, the New York City Comptroller began an audit of the Domain Awareness System, stating "The program does not have an outside monitor and it remains unclear whether it has ever been subjected to any internal audit to ensure access is protected and that it is not vulnerable to abuse or misuse." The audit concluded in 2015 and produced a report which included descriptions of how the NYPD was in compliance with regulations, as well as points of concern in the system, including "individuals who were no longer NYPD employees whose DAS access had not been deactivated in the system" and that "Integrity Control Officers did not receive a standard set of criteria to use when reviewing DAS user activities and that the Integrity Control Officers had other responsibilities outside of the DAS system."
State governance
The New York State Division of Criminal Justice Services published suggested guidelines for the usage of license plate readers within the state. These include the recommendation of creating a "hot list" of license plates to be monitored, consisting of gang members/associates, sex offenders, crime suspects, fugitives and search warrant targets.
Licensing and exporting
Several large corporations are involved in developing the technology which composes the current surveillance systems in NYC; these include, but are not limited to, IBM, Microsoft and Palantir. The Domain Awareness System is licensed out by Microsoft to other cities, with New York City getting 30% of the profits. So far the system has been licensed by the Washington DC Metro Police, the Brazilian National Police, and the Singapore Police Force.
See also
Mass surveillance in China
Mass surveillance in the United Kingdom
Policing
New York City Police Department corruption and misconduct
References
External links
New York City Council Technology Committee meeting 2/12/2019 Contains video recording of the meeting, which included discussion of the POST Act
New York City Council Technology Committee meeting 4/4/2019 Contains video recording of the meeting, which was to provide oversight of Local Law 49
ADS Taskforce public forums Contains video recordings and transcripts of meetings, where the public provided input to the task force created by Local Law 49
Law enforcement in the New York metropolitan area Surveillance Mass surveillance
217469
https://en.wikipedia.org/wiki/INT%20%28x86%20instruction%29
INT (x86 instruction)
INT is an assembly language instruction for x86 processors that generates a software interrupt. It takes the interrupt number, formatted as a byte value. When written in assembly language, the instruction is written like this:
INT X
where X is the software interrupt that should be generated (0-255). As is customary with machine binary arithmetic, interrupt numbers are often written in hexadecimal form, which can be indicated with the prefix 0x or the suffix h. For example, INT 13H will generate the 20th software interrupt (0x13 is the number 19, nineteen, written in hexadecimal notation, and the count starts with 0), causing the function pointed to by the 20th vector in the interrupt table to be executed.
Real mode
When generating a software interrupt, the processor calls one of the 256 functions pointed to by the interrupt address table, which is located in the first 1024 bytes of memory while in real mode (see Interrupt vector). It is therefore entirely possible to use a far-call instruction to start the interrupt function manually after pushing the flag register. One of the most useful DOS software interrupts was interrupt 0x21. By calling it with different parameters in the registers (mostly AH and AL), one could access various IO operations, string output and more. Most Unix systems and derivatives do not use software interrupts, with the exception of interrupt 0x80, used to make system calls. This is accomplished by entering a 32-bit value corresponding to a kernel function into the EAX register of the processor and then executing INT 0x80 (see the sketch at the end of this article).
INT3
The INT3 instruction is a one-byte instruction defined for use by debuggers to temporarily replace an instruction in a running program in order to set a code breakpoint. The more general INT XXh instructions are encoded using two bytes. This makes them unsuitable for use in patching instructions (which can be one byte long); see SIGTRAP. The opcode for INT3 is 0xCC, as opposed to the opcode for INT immediate8, which is 0xCD immediate8. Since the dedicated 0xCC opcode has some desired special properties for debugging which are not shared by the normal two-byte opcode for an INT3, assemblers do not normally generate the generic 0xCD 0x03 opcode from mnemonics.
INTO
The INTO instruction is another one-byte instruction. It is a conditional interrupt, triggered when the overflow flag is set at the time this opcode is executed. It implicitly indicates interrupt #4. The opcode for INTO is 0xCE; however, it is unavailable in x86-64 mode.
See also
INT 10H
INT 13H
DOS API
Interrupt
BIOS interrupt call
Ralf Brown's Interrupt List
References
X86 instructions Interrupts
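To make the Real mode discussion of interrupt 0x80 concrete, here is a minimal C sketch of a 32-bit Linux system call made with INT. It assumes an i386 build (for example, gcc -m32), where EAX selects the kernel function (4 is sys_write on that ABI) and EBX, ECX, and EDX carry its arguments; int 0x80 is not the system-call convention on 64-bit Linux.

int main(void) {
    const char msg[] = "hello via int 0x80\n";
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)                       /* EAX: return value     */
                      : "a"(4L),                        /* EAX: sys_write = 4    */
                        "b"(1L),                        /* EBX: fd 1 (stdout)    */
                        "c"(msg),                       /* ECX: buffer pointer   */
                        "d"((long)(sizeof msg - 1))     /* EDX: byte count       */
                      : "memory");
    return ret < 0;    /* on this ABI, a negative EAX signals an error */
}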
7483908
https://en.wikipedia.org/wiki/WIMATS
WIMATS
WIMATS is a software application that transcribes mathematical and scientific text into braille for braille presses. Based on the Nemeth Code, its output can be printed on a variety of braille embossers. The transcription software was jointly developed by Webel Mediatronics Limited (WML) and the International Council for Education of People with Visual Impairment (ICEVI), and was officially launched on 17 July 2006. WIMATS supports input of arithmetic, algebra, geometry, trigonometry, calculus, vectors, set notation and Greek letters. The graphical user interface is designed to be easy to use, so training does not take long. The software addresses a long-felt need for mathematics and science study materials at the higher secondary and college level for visually impaired persons, who previously faced a shortage of braille books for mathematics and science courses. ICEVI has a presence in 185 countries, and the software developed jointly with WML is made available to all of them through ICEVI. References West Bengal Electronics Industry Development Corporation Limited Science software
32166425
https://en.wikipedia.org/wiki/TSheets
TSheets
TSheets is a web-based and mobile time tracking and employee scheduling app. The service runs in a web browser or on mobile phones. TSheets is an alternative to a paper timesheet or punch cards. History Based in Eagle, Idaho, TSheets was co-founded in 2006 by CEO Matt Rissell and CTO Brandon Zehm. In 2008, TSheets released a native employee time tracking app for the iPhone. In 2012, TSheets released an integration with accounting and payroll software QuickBooks. In 2015, TSheets accepted $15 million in growth equity funding from Summit Partners, bought a building in Eagle, Idaho, and opened a second location in Sydney, Australia. On 5 December 2017, Intuit announced an agreement to acquire TSheets. The transaction was valued at approximately $340 million of cash and other consideration and closed on 11 January 2018. After the transaction closed, Time Capture became a new business unit within Intuit’s Small Business and Self-Employed Group, with Matt Rissell assuming the leader role reporting to Alex Chriss. TSheets’s Eagle, Idaho site became an Intuit location. TSheets began as a basic web-based time tracking service and is now a mobile time tracking and scheduling app with GPS location tracking, software integrations, and custom settings. TSheets has partnered and integrated with a number of HR, accounting, and payroll software providers. It also features an open API, allowing developers to connect TSheets to their existing applications and software where no ready-made integration exists. Press TSheets has received press in the technology and business sectors, including an interview with CEO Matt Rissell in Fast Company’s WorkFast TV series, and mentions in Inc. and TechCrunch, the latter republished in the Washington Post. TSheets has also appeared in Business 2 Community, Idaho Business Review, and the WSJ, among several other news and online technology sources. See also Comparison of time-tracking software References External links Official site Time-tracking software Web applications Administrative software Meridian, Idaho Companies based in Idaho Proprietary software
31849
https://en.wikipedia.org/wiki/Telecommunications%20in%20Uruguay
Telecommunications in Uruguay
Telecommunications in Uruguay includes radio, television, telephones, and the Internet. Radio and television Uruguay has a mixture of privately owned and state-run broadcast media; more than 100 commercial radio stations and about 20 TV channels. Cable TV is readily available. Uruguay adopted the hybrid Japanese/Brazilian HDTV standard (ISDB-T) in December 2010. International call sign prefixes for radio and television stations: CV, CW, and CX Telephones Land lines: 1,165,673 lines in use, equivalent to 0.33 lines per capita (December 2019). Mobile cellular lines: 5,667,631 lines in use, equivalent to 1.64 lines per capita (December 2019). Domestic system: fully digitalized, most modern facilities concentrated in Montevideo; nationwide microwave radio relay network. Country calling code: 598. Submarine cable systems: the UNISUR submarine cable system provides direct connectivity to Brazil and Argentina. Satellite earth stations: two, Intelsat (Atlantic Ocean). Internet Internet users: 2.7 million (2020). Internet hosts: 1.036 million (2012). Average connection speed (Q4 2016): 8.2 Mbit/s (world average = 7.0 Mbit/s) Top-level domain: .uy In Uruguay, one can access the Internet mainly by using: FTTH services, provided by the state-owned company (ANTEL), which cover most of Montevideo and the most important cities in the rest of the 19 departments. ADSL services, provided by the state-owned company (ANTEL). LTE 4G service with high-speed connections, about 20 Mbit/s, offered by all the mobile phone companies. 3G mobile Internet, offered by all the mobile phone companies. Wireless ISPs, which tend to be more expensive because of high taxation and radio spectrum license costs. WiMAX, launched by Dedicado in 2012. WiFi access provided at shopping malls, on bus lines and at most commercial businesses. Fiber to the home In November 2010, ANTEL announced that it would start rolling out Fiber to the home (FTTH) in the second half of 2011. As of September 2017, 49% of the homes with Internet access obtain it via FTTH. As of January 2019, Antel offers the following fiber to the home plans: Hogar Básico: up to 60 Mbit/s down and 10 Mbit/s up for 1,105 UYU (US$34). After 350 GB of monthly data consumption, speed is reduced to 3 Mbit/s down and 512 kbit/s up. Hogar Plus: up to 120 Mbit/s down and 12 Mbit/s up for 1,470 UYU (US$45). After 500 GB of monthly data consumption, speed is reduced to 6 Mbit/s down and 1 Mbit/s up. Hogar Premium: up to 240 Mbit/s down and 24 Mbit/s up for 2,100 UYU (US$65). After 700 GB of monthly data consumption, speed is reduced to 12 Mbit/s down and 1 Mbit/s up. Entretenimiento Plus: up to 300 Mbit/s down and 30 Mbit/s up for 2,500 UYU (US$77). After 1,000 GB of monthly data consumption, speed is reduced to 12 Mbit/s down and 1 Mbit/s up. All home consumer plans provide a dynamic IP address only. There are also business plans available with no monthly data consumption limit that provide fixed IP addresses. ADSL ANTEL is the only ISP to provide ADSL service, since it enjoys a monopoly in the basic telephony area. Other ISPs use other technologies, such as radio, to reach customers. The following are the plans marketed to home users by Antel as of May 2018. All plans require having a corresponding voice phone service with Antel. All prices include VAT. Consumer data plans Internet Básico: 3,072 kbit/s down and 512 kbit/s up for 955 UYU (US$31) a month. After 350 GB of data consumption, speed is reduced to 2,048 kbit/s down and 512 kbit/s up. 
Internet Plus: 5,120 kbit/s down and 512 kbit/s up for 1,265 UYU (US$41) a month. After 500 GB of data consumption, speed is reduced to 2,048 kbit/s down and 512 kbit/s up. Internet Premium: 10,240 kbit/s down and 512 kbit/s up for 1,700 UYU (US$55) a month. After 700 GB of data consumption, speed is reduced to 8,192 kbit/s down and 512 kbit/s up. All consumer plans provide a dynamic IP address. Fixed wireless Most of Uruguay's landmass is too far away from cities to have wired Internet access. For customers in these rural and low-density suburban areas, fixed wireless ISPs provide a service. Wireless Internet service has also provided city Internet users with some degree of choice in a country where private companies have not been allowed to offer wired alternatives (e.g. cable TV Internet, fiber to the home) to the state-operated ADSL service. Dedicado is a local wireless ISP. It appeared before or at about the same time as Anteldata (around 1999), but since ADSL was not available at the same time in every neighborhood, Dedicado had the majority of the permanent Internet connections. As of November 2007, ADSL is available in every neighborhood in Montevideo and in most other cities, and Dedicado lost a large market share, both because it was more expensive and because of poor service to its users. It started a large advertising campaign but did not attend to the technical details of serving its growing number of users, so its quality of service decreased. As of 2012, its quality-of-service issues appear to be on the mend, but its pricing issues continue, especially in the rural market, where it has no credible competition and has steadily increased prices. Dedicado originally operated Ericsson fixed wireless equipment and later transitioned to Motorola Canopy and Cambium technology. In 2005, it started deploying WiMAX services. However, as of May 2010, that service was not yet offered or advertised. There are other wireless ISPs, but Dedicado is the main one. Telmex is another entrant in the Uruguayan fixed wireless space. As of early 2012, however, it was still a tentative player, with limited coverage of the country and some technical shortcomings (e.g. no Skype connectivity). In February 2012, Antel announced a push to provide fixed wireless Internet service to rural customers using its 3G cellular network. As of November 2012, the service was being actively offered to customers of the company's Ruralcel fixed wireless telephone service. Customers who sign up get the equipment (a ZTE MF612/MF32 or Huawei B660 3G router) and monthly Internet service for free. While the network and router are capable of supporting multi-Mbit/s service, the free offering is throttled back to 256 kbit/s down and 64 kbit/s up, and capped at 1 Gbyte of monthly data transfer (except for a small number of customers grandfathered from a previous service). Once that data limit is reached, the customer has to recharge the service using a prepaid card at a rate of approximately US$10/Gbyte. There is an alternative monthly billing plan that offers 2 Mbit/s down and 512 kbit/s up with a 5 Gbyte data cap for US$15, plus US$10 for each additional Gbyte (up to 5 Gbyte). There is no unlimited data plan, which limits this technology's ability to compete in the non-residential fixed wireless space against vendors like Dedicado. Mobile wireless Internet access via cell phone networks is probably the most vibrant and competitive Internet marketplace in Uruguay. 
All the Uruguayan cell phone companies (Antel, Claro, Movistar) offer data plans for their smartphone users as well as USB modems for personal computers. Ancel/Antel even offers a bundle of cellular Internet access and ADSL, an unusual but potentially attractive combination for home ADSL users who also want to have Internet access on the go. The speeds delivered by all companies within their areas of coverage keep getting faster, and the areas of coverage keep expanding (as of 2012, Ancel probably still had the edge in the percentage of the country's land covered). Vendors are shifting from 3G to 4G, starting in the area around Montevideo. From a consumer's standpoint, the only discouraging trend in this market is the adoption of data volume caps by all vendors. As of August 2012, no vendor website offered an unlimited mobile Internet data plan (the closest was an "unlimited during nights and weekends" plan from Claro). This means these offerings are unlikely to cross-sell into the fixed wireless Internet market, where unlimited data plans tend to be the rule. Internet Service Providers The main Internet Service Providers (ISPs) in Uruguay are: ANTEL (http://www.antel.com.uy) Claro (http://www.claro.com.uy) Dedicado (http://www.dedicado.com.uy) Movistar (http://www.movistar.com.uy) Cable Internet Despite a fully developed cable network in all mid- and large-size cities, there is no Internet access through cable TV systems in Uruguay, as it has been steadfastly opposed by government regulators. Cuba is the only other country in the Americas missing this component of the Internet access ecosystem. Internet censorship and surveillance There are no government restrictions on access to or usage of the Internet or credible reports that the government monitors e-mail or Internet chat rooms without judicial oversight. Uruguayan law provides for freedom of speech and press, and the government generally respects these rights in practice. An independent press, an effective judiciary, and a functioning democratic political system combine to ensure these rights. The law also prohibits arbitrary interference with privacy, family, home, or correspondence, and the government generally respects these prohibitions in practice. In August 2016, the President of URSEC (the Uruguayan government agency equivalent to the FCC in the US) stated that his agency was at the government's disposal to block the IP addresses of the servers of Uber to keep its app from operating in Uruguay. If carried out, this would constitute the first and only known instance of Great Firewall-style blocked-IP censorship in Uruguay. In the same interview he stated that WhatsApp "transgresses the limits of communications". In July 2017, the Uruguayan subsecretary of economy stated that the government was considering "blocking the signals" of online gaming sites, which in Internet terms would seem to refer to some kind of IP-based censorship. In April 2018, a Uruguayan court ordered all Uruguayan ISPs to block their users from accessing the content of specific sites broadcasting sports events copyrighted by Fox Sports Latin America. This is a key precedent that differs dramatically from piracy enforcement in first-world countries like the US, which focuses on the takedown of the sites themselves and does not engage in IP-based censorship. A Fox spokesman declared the network would try to use the precedent to get similar rulings in other Latin American countries. 
In November 2016, the Uruguayan Ministry of the Interior initiated legal action against a Facebook and Twitter site ("chorros_uy") that reports criminal activity across Uruguay, alleging that it "raises public alarm". The Interamerican Press Society swiftly criticized the Ministry's attempt to censor the site as "contrary to democracy's norms". As of 2017, a surveillance software suite called "Guardián", capable of spying on internet traffic, email accounts, social networks and telephone calls, was being used without proper authorization from the judiciary. References External links URSEC, Unidad Reguladora de Servicios de Comunicaciones (Regulatory Services Unit for Communications). UY NIC, Registro de Dominios UY (Registrar of Domains for UY). A directory of Uruguayan blogs Uruguay Internet in Uruguay
41494988
https://en.wikipedia.org/wiki/Matthew%20D.%20Green
Matthew D. Green
Matthew Daniel Green (born 1976) is an American cryptographer and security technologist. Green is an Associate Professor of Computer Science at the Johns Hopkins Information Security Institute. He specializes in applied cryptography, privacy-enhanced information storage systems, anonymous cryptocurrencies, elliptic curve crypto-systems, and satellite television piracy. He is a member of the teams that developed the Zerocoin anonymous cryptocurrency and Zerocash. He has also been influential in the development of the Zcash system. He has been involved in the groups that exposed vulnerabilities in RSA BSAFE, Speedpass and E-ZPass. Education Green received a B.S. from Oberlin College (Computer Science), a B.M. from Oberlin College (Electronic Music), a Master's from Johns Hopkins University (Computer Science), and a PhD from Johns Hopkins University (Computer Science). His dissertation was titled "Cryptography for Secure and Private Databases: Enabling Practical Data Access without Compromising Privacy". Blog Green is the author of the blog "A Few Thoughts on Cryptographic Engineering". In September 2013, a blog post by Green summarizing and speculating on the NSA's programs to weaken cryptography, titled "On the NSA", was controversially taken down by Green's academic dean at Johns Hopkins for "contain[ing] a link or links to classified material and also [using] the NSA logo". As Ars Technica notes, this was "a strange request on its face", as this use of the NSA logo by Green was not "reasonably calculated to convey the impression that such use is approved, endorsed, or authorized by the National Security Agency", and linking to classified information published by news organizations is legally entirely uncontroversial. The university later apologized to Green, and the blog post was restored (sans NSA logo), with a Johns Hopkins spokesman saying that "I'm not saying that there was a great deal of legal analysis done" as explanation for the legally unmotivated takedown. In addition to general blog posts about the NSA, encryption, and security, Green's blog entries on the NSA's backdoor in Dual_EC_DRBG, and RSA Security's usage of the backdoored cryptographically secure pseudorandom number generator (CSPRNG), have been widely cited in the mainstream news media. Work Green currently holds the position of Associate Professor at the Johns Hopkins Information Security Institute. He teaches courses pertaining to practical cryptography. Green is part of the group that developed Zerocoin, an anonymous cryptocurrency protocol. Zerocoin is a proposed extension to the Bitcoin protocol that would add anonymity to Bitcoin transactions. Zerocoin provides anonymity by the introduction of a separate zerocoin cryptocurrency that is stored in the Bitcoin block chain. Though originally proposed for use with the Bitcoin network, zerocoin could be integrated into any cryptocurrency. His research team has exposed flaws in more than one third of SSL/TLS-encrypted web sites as well as vulnerabilities in encryption technologies, including RSA BSAFE, Exxon/Mobil Speedpass, E-ZPass, and automotive security systems. In 2015, Green was a member of the research team that identified the Logjam vulnerability in the TLS protocol. Green started his career in 1999 at AT&T Laboratories in Florham Park, New Jersey. At AT&T Labs he worked on a variety of projects, including audio coding/secure content distribution, streaming video and wireless localization services. 
As a graduate student he co-founded Independent Security Evaluators (ISE) with two fellow students and Avi Rubin in 2005. Green served as CTO of ISE until his departure in 2011. Green is a member of the technical advisory board for the Linux Foundation Core Infrastructure Initiative, formed to address critical Internet security concerns in the wake of the Heartbleed security bug disclosed in April 2014 in the OpenSSL cryptography library. He sits on the technical advisory boards for CipherCloud, Overnest and Mozilla Cybersecurity Delphi. Green co-founded and serves on the Board of Directors of the Open Crypto Audit Project (OCAP), which undertook a security audit of the TrueCrypt software. References External links Matthew D. Green his personal page at Johns Hopkins University A Few Thoughts on Cryptographic Engineering his personal crypto blog ISE website his company page 1976 births Living people Oberlin College alumni Johns Hopkins University alumni Johns Hopkins University faculty Modern cryptographers
15183990
https://en.wikipedia.org/wiki/Abaqus
Abaqus
Abaqus FEA (formerly ABAQUS) is a software suite for finite element analysis and computer-aided engineering, originally released in 1978. The name and logo of this software are based on the abacus calculation tool. The Abaqus product suite consists of five core software products: Abaqus/CAE, or "Complete Abaqus Environment" (a backronym with a root in Computer-Aided Engineering). It is a software application used for both the modeling and analysis of mechanical components and assemblies (pre-processing) and visualizing the finite element analysis result. A subset of Abaqus/CAE including only the post-processing module can be launched independently in the Abaqus/Viewer product. Abaqus/Standard, a general-purpose finite-element analyzer that employs an implicit integration scheme (traditional). Abaqus/Explicit, a special-purpose finite-element analyzer that employs an explicit integration scheme to solve highly nonlinear systems with many complex contacts under transient loads. Abaqus/CFD, a computational fluid dynamics software application which provides advanced computational fluid dynamics capabilities, with extensive preprocessing and postprocessing support provided in Abaqus/CAE. Abaqus/Electromagnetic, a computational electromagnetics software application which solves advanced computational electromagnetic problems. The Abaqus products use the open-source scripting language Python for scripting and customization. Abaqus/CAE uses the FOX toolkit for GUI development. History The name of Abaqus was initially written as ABAQUS when it was first released. The early history of ABAQUS is very tightly connected with the early history of MARC Analysis Research Corporation. Dr. David Hibbitt, Dr. Bengt Karlsson, and Dr. Paul Sorensen co-founded the company later known as Hibbitt, Karlsson & Sorensen, Inc. (HKS) in January 1978 to develop and market ABAQUS software. Hibbitt and Sorensen had met while completing their PhDs at Brown University, while Karlsson encountered the two in his capacity as a support analyst in a data centre in Stockholm. The original ABAQUS logo is a stylized abacus, with its beads set to the company's official launch date of February 1, 1978 (2-1-1978). ABAQUS version 1 was created for a specific client, Westinghouse Hanford Company, which used the software to analyze nuclear fuel rod assemblies. ABAQUS version 3 was released in June 1979. In the early days, ABAQUS was designed primarily for the nonlinear static and dynamic analysis of structures, and nonlinear steady and transient analysis of heat transfer or conduction problems. It was initially distributed via CDC's Cybernet service. The first parallel version of ABAQUS, version 5.4, was made available to users in 1995. The core product, eventually known as ABAQUS/Standard, an implicit finite element solver, was complemented by other software packages, including ABAQUS/Explicit, a dynamic explicit analysis package released in 1992, and ABAQUS/CAE, a finite element pre- and post-processing package released in 1999. The first official release of ABAQUS/Explicit was hand-delivered to M.I.T. in 1992. Later on, the company's name was changed to ABAQUS, Inc. in late 2002 to reflect the company's focus on this product line. Then, in October 2005, the company, with its 525 employees, was acquired by Dassault Systèmes for $413 million, about four times the company's annual revenue of approximately $100 million. After that, ABAQUS, Inc. became part of Dassault Systèmes Simulia Corp. 
Dr. David Hibbitt was still with the company he co-founded, as chairman, while Mark Goldstein was president and CEO, when the company was acquired by Dassault Systèmes. After 23 years of leadership, David Hibbitt retired in 2001; Bengt Karlsson and Paul Sorensen followed suit the following year. All three are still living in New England. The headquarters of the company was located in Providence, Rhode Island until 2014. Since 2014, the company's headquarters have been located in Johnston, Rhode Island, United States. Release history The first version of ABAQUS was delivered in September 1978. Version 3 of ABAQUS was released in June 1979. The first official release of ABAQUS/Explicit was in 1992. Version 0 of ABAQUS/Viewer was released as a standalone product in 1998. The same features were made available as the Visualization module of ABAQUS/CAE when it was first released in 1999. In recent years, a new version of Abaqus has been released near the end of every year. Applications Abaqus is used in the automotive, aerospace, and industrial products industries. The product is popular with non-academic and research institutions in engineering due to its wide material modeling capability and the program's ability to be customized; for example, users can define their own material models so that new materials can also be simulated in Abaqus. Abaqus also provides a good collection of multiphysics capabilities, such as coupled acoustic-structural, piezoelectric, and structural-pore capabilities, making it attractive for production-level simulations where multiple fields need to be coupled. Abaqus was initially designed to address nonlinear physical behavior; as a result, the package has an extensive range of material models, such as elastomeric (rubberlike) and hyperelastic (soft tissue) material capabilities. Solution Sequence Every complete finite-element analysis consists of three separate stages: Pre-processing or modeling: This stage involves creating an input file which contains an engineer's design for a finite-element analyzer (also called a "solver"). Processing or finite element analysis: This stage solves the model defined in the input file and produces an output file. Post-processing or generating reports, images, animations, etc. from the output file: This stage is a visual rendering stage. Abaqus/CAE is capable of pre-processing, post-processing, and monitoring the processing stage of the solver; however, the first stage can also be done by other compatible CAD software, or even a text editor. Abaqus/Standard, Abaqus/Explicit or Abaqus/CFD are capable of accomplishing the processing stage. Dassault Systèmes also produces Abaqus for CATIA, which adds advanced processing and post-processing stages to a pre-processor like CATIA. Solvers Comparison The following is a comparison between the solver capabilities of Abaqus/Standard and Abaqus/Explicit. Notes The more complex the contacts become, the more repetitive calculations ABAQUS/Standard has to solve, and the more time and disk space are needed; ABAQUS/Explicit is the optimal choice in this case. Element types include static elements, dynamic elements, thermal elements and electrical elements. Steady, static and constant loads are the same. Transient loads include quasi-static loads (slowly varying loads in which the effect of inertia is small enough to neglect) and dynamic loads (faster-varying loads). 
See also ABAQUS, Inc List of finite element software packages Dassault Systèmes References External links Dassault Systèmes Finite element software Finite element software for Linux 1978 software
612664
https://en.wikipedia.org/wiki/Idomeneo
Idomeneo
Idomeneo, re di Creta ossia Ilia e Idamante (Italian for Idomeneus, King of Crete, or, Ilia and Idamante; usually referred to simply as Idomeneo, K. 366) is an Italian-language opera seria by Wolfgang Amadeus Mozart. The libretto was adapted by Giambattista Varesco from a French text by Antoine Danchet, based on a 1705 play by Crébillon père, which had been set to music by André Campra as Idoménée in 1712. Mozart and Varesco were commissioned in 1780 by Karl Theodor, Elector of Bavaria, for a court carnival. Karl Theodor probably chose the subject, though it may have been Mozart. The work premiered on 29 January 1781 at the Cuvilliés Theatre in Munich, Germany. It is now considered one of the greatest operas of all time. Composition The libretto clearly draws inspiration from Metastasio in its overall layout, the type of character development, and the highly poetic language used in the various numbers and the secco and stromentato recitatives. The style of the choruses, marches, and ballets is very French, and the shipwreck scene towards the end of act I is almost identical in structure and dramatic working-out to a similar scene in Gluck's Iphigénie en Tauride. The sacrifice and oracle scenes are similar to Gluck's Iphigénie en Aulide and Alceste. Kurt Kramer has suggested that Varesco was familiar with Calzabigi and therefore the work of Gluck, especially the latter's Alceste; much of what we see in Varesco's most dramatic passages is the latest French style, mediated by Calzabigi. It is thanks to Mozart, though, that this mixture of French styles (apart from a few choruses) moves away from Gluck and France and returns to its more Italian (opera seria) roots; the singers were all trained in the classical Italian style, after all, and the recitatives are all classically Italian. Ballet music, K. 367 As per French tradition, the opera uses ballet to its advantage. Mozart wrote one to be performed in the opera (K. 367), which he considered more of a Lullian divertissement. It is in several parts and lasts around fifteen minutes. The structure is as follows; the first four dances transition into each other, while the last three are separate. Chaconne (Allegro) in D major, Annonce (Larghetto) in B-flat major, Chaconne qui reprenda (Allegro, starts in D minor, then goes to D major), Pas seul (Largo) in D major — Allegretto (attacca), — Piu Allegro (attacca), — Piu allegro (attacca), Passepied in B-flat, Gavotte in G, Passacaille in E-flat major, (unfinished) The ballet is scored for the opera's full orchestra of flutes, oboes, clarinets (only present in the passacaille), bassoons, horns, trumpets, timpani, and strings. It showcases the many famous techniques of the Mannheim orchestra, including stupefying crescendos and tutti passages. Throughout the manuscript, Mozart noted the dancers who would be taking part in a specific section of the ballet, e.g. "Pas seul de Mad. Falgera" or "Pas seul de Mr. [Jean-Pierre] Le Grand"; Le Grand was the main choreographer for the opera's Munich premiere. It is unclear exactly where Mozart intended the divertissement to occur in the opera. In the Neue Mozart-Ausgabe, Harald Heckmann suggests that it was performed after the first act, but Daniel Heartz states that it must have been performed after the final chorus of the third and last act, citing the majesty and pomposity of the D major Pas seul, perfect for concluding an opera. 
Along with the Chaconne up to the Pas seul, which form a consistent whole via attacca transitions, the manuscript is bound with three other dances: a passepied in B-flat, a gavotte in G (which is rather famous; Tchaikovsky conducted it at one of his Russian Musical Society concerts), and an unfinished passacaille in E-flat. Because of their separation from the first half of the ballet, as well as their incomplete state, Daniel Heartz speculates that they were simply never performed. Confirming the influence of Gluck, Mozart begins the Chaconne with a musical quotation from Gluck's own ballet from Orfeo ed Euridice. Performance history It was first performed at the Cuvilliés Theatre of the Munich Residenz on 29 January 1781, under the musical direction of Christian Cannabich. The opera apparently owed much of its success at its first performance to the set designs: a notice in the Munich press did not mention Mozart by name but said (in translation): "The author, composer and translator are all natives of Salzburg; the decors, among which the view of the port and Neptune's temple were outstanding, were masterpieces by our renowned theatre designer, court Councillor Lorenzo Quaglio, and everyone admired them tremendously." Idomeneo was Mozart's first mature opera. In it, for the first time, he ended the work in the key of the overture, a practice, unique among composers, that he continued in all his subsequent operas. With Idomeneo, he demonstrated a mastery of orchestral color, accompanied recitatives, and melodic line. Mozart fought with the librettist, the court chaplain Varesco, making large cuts and changes, even down to specific words and vowels disliked by the singers (too many "i"s in "rinvigorir", which in Italian are pronounced as in bee). Idomeneo was performed three times in Munich. Later in 1781, Mozart considered (but did not put into effect) revisions that would have brought the work closer into line with Gluck's style; this would have meant a bass Idomeneo and a tenor Idamante. A concert performance was given in 1786 at the Palais Auersperg in Vienna. For this, Mozart wrote some new music, made some cuts, and changed Idamante from a castrato to a tenor. The British premiere was given by the amateur Glasgow Grand Opera Society in 1934. The first performance in the United States was produced by Boris Goldovsky at the Berkshire Music Festival at Tanglewood during the summer of 1947. Today Idomeneo is part of the standard operatic repertoire. There are several recordings of it (see below), and it is regularly performed. In 2006 there was a controversy over the cancelling of a 2003 production directed by Hans Neuenfels at the Deutsche Oper Berlin (see 2006 Idomeneo controversy). Richard Strauss version The approach of the 150th anniversary of Idomeneo's premiere placed some major European opera houses in a quandary: commemorative performances of so magnificent and historically important a score seemed obligatory, but, at the same time, how dared they mount an opera that 1930/31 audiences were bound to reject as hopelessly unstageworthy? The solution hit on in Munich and Vienna was to have Idomeneo adapted for modern tastes, but to show due reverence to Mozart's genius by entrusting the adaptations to famous twentieth-century opera composers with impeccable credentials as Mozarteans. 
Thus Munich commissioned an Idomeneo revision from Ermanno Wolf-Ferrari, performed in 1931, and the same year the Vienna State Opera presented a distinctively interventionist version of the score by Richard Strauss. For his adaptation of Idomeneo, Strauss employed a German libretto by Lothar Wallerstein that was partially a translation of the original Italian libretto, but with some changes to reflect the rearranging of the music. Strauss replaced about a third of Mozart's score with some of his own music (even introducing the "Fall of Troy" motif from his own 1928 opera Die ägyptische Helena), and rearranged much of the music left behind. For example, Ilia's opening aria "Padre, germani, addio" is mostly intact, with a few changes to the long introductory recitative, but when Idamante (specifically written to be sung by a tenor in this version) enters, he sings Mozart's Non più, tutto ascoltai, K. 490 (which Mozart had added for his 1786 revision of Idomeneo) instead of "Non ho colpa". A few major changes to the plot were made as well, such as changing princess Elettra to priestess Ismene. Critics have noted that Strauss's additions contain an interesting blend of the classical style of composition and Strauss's own characteristic sound. In 1984, New York's Mostly Mozart Festival presented Strauss's version with Jerry Hadley in the title role, Delores Ziegler as Idamante, and Alessandra Marc as Ismene. Roles Instrumentation The instrumentation is: Woodwinds: 2 flutes, piccolo (only in the storm of act 2), 2 oboes, 2 clarinets, 2 bassoons. B clarinets (an instrument that is now obsolete) are used in No. 15 (pp. 283ff in NMA) and No. 19 (pp. 352ff). Brass: 4 horns (in D, in C, in B-flat (alto)/in B, in G), 2 trumpets in D, 3 trombones (only accompanying the off-stage voice of Neptune in act 3) Percussion: timpani in D and in A Basso continuo in the secco recitatives: harpsichord and violoncello (period performance practice often uses a fortepiano only) Strings Synopsis Overture The overture, in D major and common time, is in a modified sonata form in which the development is but a very short transition section connecting the exposition with the recapitulation. Other conventional hallmarks of the sonata form are apparent: the exposition modulates from the tonic (D major) to the dominant (A major), while the recapitulation is centred on the tonic. The overture concludes with a coda ending in D major chords. These chords, soft and tentative, turn out not to be a resolution of the overture in the tonic but chords in the dominant of G minor, which is the home key of the scene that immediately follows. Act 1 Island of Crete, shortly after the Trojan War. Ilia, daughter of the defeated Trojan King Priam, has been taken to Crete after the war. She loves Prince Idamante, son of the Cretan King Idomeneo, but hesitates to acknowledge her love. Idamante frees the Trojan prisoners in a gesture of good will. He tells Ilia, who is rejecting his love, that it is not his fault that their fathers were enemies. Trojans and Cretans together welcome the return of peace, but Electra, daughter of the Greek King Agamemnon, is jealous of Ilia and does not approve of Idamante's clemency toward the enemy prisoners. Arbace, the king's confidant, brings news that Idomeneo has been lost at sea while returning to Crete from Troy. 
Electra, fearing that Ilia, a Trojan, will soon become Queen of Crete, feels the furies of the underworld rise up in her heart (aria: "Tutte nel cor vi sento, furie del crudo averno" – "I feel you all in my heart, furies of the cruel underworld"). Idomeneo is saved by Neptune (god of the sea) and is washed up on a Cretan beach. There he recalls the vow he made to Neptune: to sacrifice, if he should arrive safely on land, the first living creature he should meet. Idamante approaches him, but because the two have not seen each other for a long time, recognition is difficult. When Idomeneo finally realizes the youth that he must sacrifice for the sake of his vow is his own child, he orders Idamante never to seek him out again. Grief-stricken by his father's rejection, Idamante runs off. Cretan troops disembarking from Idomeneo's ship are met by their wives, and all praise Neptune. Act 2 At the king's palace, Idomeneo seeks counsel from Arbace, who says another victim could be sacrificed if Idamante were sent into exile. Idomeneo orders his son to escort Electra to her home, Argos. Idomeneo's kind words to Ilia move her to declare that since she has lost everything, he will be her father and Crete her country. As she leaves, Idomeneo realizes that sending Idamante into exile has cost Ilia her happiness as well as his own. Electra welcomes the idea of going to Argos with Idamante. At the port of Sidon (a fictional city of Crete), Idomeneo bids his son farewell and urges him to learn the art of ruling while he is away. Before the ship can sail, however, a storm breaks out, and a sea serpent appears. Recognizing it as a messenger from Neptune, the king offers himself as atonement for having violated his vow to the god. Act 3 In the royal garden, Ilia asks the breezes to carry her love to Idamante, who appears, explaining that he must go to fight the serpent. When he says he would rather die than suffer the torments of his rejected love, Ilia confesses her love. They are surprised by Electra and Idomeneo. When Idamante asks his father why he sends him away, Idomeneo can only reply that the youth must leave. Ilia asks for consolation from Electra, who is preoccupied with revenge. Arbace comes with news that the people, led by the High Priest of Neptune, are clamoring for Idomeneo. The High Priest tells the king of the destruction caused by Neptune's monster, urging Idomeneo to reveal the name of the person whose sacrifice is demanded by the god. When the king confesses that his own son is the victim, the populace is horrified. Outside the temple, the king and High Priest join Neptune's priests in prayer that the god may be appeased. Arbace brings news that Idamante has killed the monster. As Idomeneo fears new reprisals from Neptune, Idamante enters in sacrificial robes, saying he understands his father's torment and is ready to die. After an agonizing farewell, Idomeneo is about to sacrifice his son when Ilia intervenes, offering her own life instead. The Voice of Neptune is heard. Idomeneo must yield the throne to Ilia and Idamante. Everyone is relieved except Electra, who longs for her own death. Idomeneo presents Idamante and his bride as the new rulers. The people call upon the god of love and marriage to bless the royal pair and bring peace. 
List of arias Act 1 "Padre, germani, addio" ("Father, brothers, farewell"), Ilia "Non ho colpa" ("I am not guilty"), Idamante "Tutte nel cor vi sento furie del cupo averno" ("I can feel you all in my heart, furies of the dark hell"), Electra "Vedrommi intorno" ("I shall see around me"), Idomeneo "Il padre adorato" ("My beloved father"), Idamante Act 2 "Se il tuo duol" ("If your pain"), Arbace "Se il padre perdei" ("If I lost my father"), Ilia "Fuor del mar" ("Out of the sea"), Idomeneo "Idol mio" ("My sweetheart"), Electra Act 3 "Zeffiretti lusinghieri" ("Zephyrs caressing"), Ilia "Se colà ne' fati è scritto" ("If it is written in the destiny"), Arbace "No, la morte io non pavento" ("No, I am not afraid of dying"), Idamante "D'Oreste, d'Ajace ho in seno i tormenti" ("I feel Orestes's and Ajax's torments in my heart"), Electra "Torna la pace" ("Peace comes again"), Idomeneo Recordings Audio Video Benjamin Britten (1969, distribution, 2008). BBC. With Peter Pears, Heather Harper, Rae Woodland, Anne Pashley, English Opera Group et al., OCLC 676295503 John Pritchard (1974) Glyndebourne Chorus. With Richard Lewis, Bozena Betley, Josephine Barstow, Leo Goeke, Alexander Oliver. James Levine (1982). Deutsche Grammophon. With Luciano Pavarotti, Frederica von Stade, Ileana Cotrubaș, Hildegard Behrens and John Alexander. This was the Metropolitan Opera's first production of the work. See details: Idomeneo (Luciano Pavarotti recording). Bernard Haitink (1983). NVC Arts. Staged by Trevor Nunn. Starring: Philip Langridge, Jerry Hadley (singing the tenor version of Idamante), Yvonne Kenny, Carol Vaness. Filmed in Glyndebourne, studio conditions. (2004). Dynamic. Staged by Pier Luigi Pizzi. Starring: Kurt Streit; Sonia Ganassi; Angeles Blancas Gulin; Iano Tamar; Jörg Schneider; Dario Magnabosco; Deyan Vatchkov; Antonietta Bellon; Lucia Gaeta; Carmine Durante, Carmine Mennella. Orchestra and Chorus of Teatro di San Carlo, Naples. Daniel Harding (2005). RAI Trade. Camilla Tilling, Emma Bell, Monica Bacelli, Steve Davislim, Francesco Meli. Staged by Luc Bondy. Live from La Scala. Sir Roger Norrington (2006). Decca. Part of the M22 Project from the Salzburg Festival. Staged by Ursel and Karl-Ernst Herrmann at the House for Mozart. With Ramón Vargas, Magdalena Kožená, Anja Harteros, Ekaterina Siurina, Jeffrey Francis. Kent Nagano (2008). Medici Arts. Staged by Dieter Dorn. With John Mark Ainsley, Pavol Breslik (singing the tenor version of Idamante), Juliane Banse, Annette Dasch, Rainer Trost, Guy de Mey, Steven Humes. Live from the Bavarian State Opera. Nikolaus Harnoncourt (2009). styriarte Festival Edition. Conducted by Nikolaus Harnoncourt, staged by Philipp Harnoncourt. With Saimir Pirgu; Julia Kleiter; Marie-Claude Chappuis; Eva Mei; Arnold Schoenberg Choir; Concentus Musicus Wien. Includes all original ballet scenes. James Levine (2017). Metropolitan Opera, with Matthew Polenzani, Alice Coote, Nadine Sierra, and Elza van den Heever. Live from the Metropolitan Opera House. Ivor Bolton (2020) with Eric Cutler, David Portillo, Annett Fritsch, Eleonora Buratto, Orchestra and chorus of the Teatro Real. DVD: Opus Arte, Cat: OA1317D See also List of operas by Mozart 2006 Idomeneo controversy References External links (Daniel Heartz, Bruce Alan Brown) Libretto, critical editions, diplomatic editions, source evaluation (German only), links to online DME recordings – Digital Mozart Edition Synopsis, Metropolitan Opera San Diego OperaTalk! 
with Nick Reveles: Mozart's Idomeneo (Adobe Flash) Libretto Italian libretto, 1781 Italian libretto, 1803 Italian-language operas Opera seria Operas by Wolfgang Amadeus Mozart 1781 operas Operas Operas set in fictional, mythological and folkloric settings Operas set in ancient Greece Cultural depictions of the Trojan War
4594
https://en.wikipedia.org/wiki/Block%20cipher
Block cipher
In cryptography, a block cipher is a deterministic algorithm operating on fixed-length groups of bits, called blocks. They are specified elementary components in the design of many cryptographic protocols and are widely used for the encryption of large amounts of data, including in data exchange protocols. A block cipher uses blocks as an unvarying transformation. Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators. Definition A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D. Both algorithms accept two inputs: an input block of size n bits and a key of size k bits; and both yield an n-bit output block. The decryption algorithm D is defined to be the inverse function of encryption, i.e., D = E−1. More formally, a block cipher is specified by an encryption function EK(P) := E(K, P) which takes as input a key K, of bit length k (called the key size), and a bit string P, of length n (called the block size), and returns a string C of n bits. P is called the plaintext, and C is termed the ciphertext. For each K, the function EK(P) is required to be an invertible mapping on {0,1}n. The inverse for E is defined as a function DK(C) := D(K, C) = EK−1(C) taking a key K and a ciphertext C to return a plaintext value P, such that DK(EK(P)) = P. For example, a block cipher encryption algorithm might take a 128-bit block of plaintext as input, and output a corresponding 128-bit block of ciphertext. The exact transformation is controlled using a second input – the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key, and yields the original 128-bit block of plain text. For each key K, EK is a permutation (a bijective mapping) over the set of input blocks. Each key selects one permutation from the set of possible permutations. History The modern design of block ciphers is based on the concept of an iterated product cipher. In his seminal 1949 publication, Communication Theory of Secrecy Systems, Claude Shannon analyzed product ciphers and suggested them as a means of effectively improving security by combining simple operations such as substitutions and permutations. Iterated product ciphers carry out encryption in multiple rounds, each of which uses a different subkey derived from the original key. One widespread implementation of such ciphers, named a Feistel network after Horst Feistel, is notably implemented in the DES cipher. Many other realizations of block ciphers, such as the AES, are classified as substitution–permutation networks. The root of all cryptographic block formats used within the Payment Card Industry Data Security Standard (PCI DSS) and American National Standards Institute (ANSI) standards lies with the Atalla Key Block (AKB), which was a key innovation of the Atalla Box, the first hardware security module (HSM). It was developed in 1972 by Mohamed M. Atalla, founder of Atalla Corporation (now Utimaco Atalla), and released in 1973. The AKB was a key block, which is required to securely interchange symmetric keys or PINs with other actors of the banking industry. This secure interchange is performed using the AKB format. 
The Atalla Box protected over 90% of all ATM networks in operation as of 1998, and Atalla products still secured the majority of the world's ATM transactions as of 2014. The publication of the DES cipher by the United States National Bureau of Standards (subsequently the U.S. National Institute of Standards and Technology, NIST) in 1977 was fundamental in the public understanding of modern block cipher design. It also influenced the academic development of cryptanalytic attacks. Both differential and linear cryptanalysis arose out of studies on the DES design. Today there is a palette of attack techniques against which a block cipher must be secure, in addition to being robust against brute-force attacks. Design Iterated block ciphers Most block cipher algorithms are classified as iterated block ciphers, which means that they transform fixed-size blocks of plaintext into identically sized blocks of ciphertext via the repeated application of an invertible transformation known as the round function, with each iteration referred to as a round. Usually, the round function R takes different round keys Ki as a second input, which are derived from the original key: Mi = RKi(Mi−1) for i = 1, …, r, where M0 is the plaintext and Mr the ciphertext, with r being the number of rounds. Frequently, key whitening is used in addition to this. At the beginning and the end, the data is modified with key material (often with XOR, but simple arithmetic operations like adding and subtracting are also used), for example M0 = M XOR K0 before the first round and C = Mr XOR Kr+1 after the last. Given one of the standard iterated block cipher design schemes, it is fairly easy to construct a block cipher that is cryptographically secure, simply by using a large number of rounds. However, this will make the cipher inefficient. Thus, efficiency is the most important additional design criterion for professional ciphers. Further, a good block cipher is designed to avoid side-channel attacks, such as branch prediction and input-dependent memory accesses that might leak secret data via the cache state or the execution time. In addition, the cipher should be concise, for small hardware and software implementations. Finally, the cipher should be easily cryptanalyzable, such that it can be shown to how many rounds the cipher needs to be reduced before existing cryptographic attacks would work – and, conversely, that it can be shown that the number of actual rounds is large enough to protect against them. Substitution–permutation networks One important type of iterated block cipher, known as a substitution–permutation network (SPN), takes a block of the plaintext and the key as inputs and applies several alternating rounds, each consisting of a substitution stage followed by a permutation stage, to produce each block of ciphertext output. The non-linear substitution stage mixes the key bits with those of the plaintext, creating Shannon's confusion. The linear permutation stage then dissipates redundancies, creating diffusion. A substitution box (S-box) substitutes a small block of input bits with another block of output bits. This substitution must be one-to-one, to ensure invertibility (hence decryption). A secure S-box will have the property that changing one input bit will change about half of the output bits on average, exhibiting what is known as the avalanche effect; that is, each output bit will depend on every input bit. 
A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible. At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined using some group operation, typically XOR. Decryption is done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order). Feistel ciphers In a Feistel cipher, the block of plain text to be encrypted is split into two equal-sized halves. The round function is applied to one half, using a subkey, and then the output is XORed with the other half. The two halves are then swapped. Let F be the round function and let K0, K1, …, Kn be the sub-keys for the rounds 0, 1, …, n respectively. Then the basic operation is as follows (a toy implementation is sketched below): Split the plaintext block into two equal pieces, (L0, R0). For each round i = 0, 1, …, n, compute Li+1 = Ri and Ri+1 = Li XOR F(Ri, Ki). Then the ciphertext is (Rn+1, Ln+1). Decryption of a ciphertext (Rn+1, Ln+1) is accomplished by computing Ri = Li+1 and Li = Ri+1 XOR F(Li+1, Ki) for i = n, n−1, …, 0. Then (L0, R0) is the plaintext again. One advantage of the Feistel model compared to a substitution–permutation network is that the round function does not have to be invertible. Lai–Massey ciphers The Lai–Massey scheme offers security properties similar to those of the Feistel structure. It also shares its advantage that the round function does not have to be invertible. Another similarity is that it also splits the input block into two equal pieces. However, the round function is applied to the difference between the two halves, and the result is then added to both half blocks. Let F be the round function and H a half-round function, and let K0, K1, …, Kn be the sub-keys for the rounds 0, 1, …, n respectively. Then the basic operation is as follows: Split the plaintext block into two equal pieces, (L0, R0). For each round i = 0, 1, …, n, compute (Li+1, Ri+1) = H(Li + Ti, Ri + Ti), where Ti = F(Li − Ri, Ki). Then the ciphertext is (Ln+1, Rn+1). Decryption is accomplished by inverting H and recomputing Ti: since adding Ti to both halves leaves their difference unchanged, Ti = F(Li − Ri, Ki) can be recovered at each round and subtracted from both halves. Then (L0, R0) is the plaintext again. Operations ARX (add–rotate–XOR) Many modern block ciphers and hashes are ARX algorithms; their round function involves only three operations: (A) modular addition, (R) rotation with fixed rotation amounts, and (X) XOR. Examples include ChaCha20, Speck, XXTEA, and BLAKE. Many authors draw an ARX network, a kind of data flow diagram, to illustrate such a round function (a quarter-round example is sketched below). These ARX operations are popular because they are relatively fast and cheap in hardware and software, their implementation can be made extremely simple, and also because they run in constant time and are therefore immune to timing attacks. The rotational cryptanalysis technique attempts to attack such round functions. Other operations Other operations often used in block ciphers include data-dependent rotations as in RC5 and RC6, a substitution box implemented as a lookup table as in Data Encryption Standard and Advanced Encryption Standard, a permutation box, and multiplication as in IDEA. 
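The Feistel recurrence above can be made concrete with a short sketch in C. This is a toy network for illustration only, not a secure cipher: the round function F and the key schedule here are invented for the example, and the 64-bit block with 16 rounds is an arbitrary choice.

```c
/* Toy Feistel network: illustrative only, NOT a secure cipher.
   Block = two 32-bit halves; F and the key schedule are arbitrary
   stand-ins chosen for this sketch. */
#include <stdint.h>
#include <stdio.h>

#define ROUNDS 16

static uint32_t rotl32(uint32_t x, unsigned r) {
    return (x << r) | (x >> (32 - r));
}

/* Invented round function: it need not be invertible, because the
   Feistel structure supplies invertibility. */
static uint32_t F(uint32_t half, uint32_t k) {
    return rotl32(half + k, 7) ^ (half >> 3) ^ k;
}

/* Invented key schedule: derive one subkey per round from a 64-bit key. */
static void schedule(uint64_t key, uint32_t sub[ROUNDS]) {
    for (int i = 0; i < ROUNDS; i++)
        sub[i] = (uint32_t)(key >> (i % 32)) ^ (0x9E3779B9u * (uint32_t)(i + 1));
}

/* L(i+1) = R(i); R(i+1) = L(i) XOR F(R(i), K(i)); output has swapped halves. */
static uint64_t encrypt(uint64_t block, const uint32_t sub[ROUNDS]) {
    uint32_t L = (uint32_t)(block >> 32), R = (uint32_t)block;
    for (int i = 0; i < ROUNDS; i++) {
        uint32_t t = L ^ F(R, sub[i]);
        L = R;
        R = t;
    }
    return ((uint64_t)R << 32) | L;           /* ciphertext (R(n+1), L(n+1)) */
}

/* Decryption runs the same network with the subkeys in reverse order. */
static uint64_t decrypt(uint64_t block, const uint32_t sub[ROUNDS]) {
    uint32_t R = (uint32_t)(block >> 32), L = (uint32_t)block;
    for (int i = ROUNDS - 1; i >= 0; i--) {
        uint32_t t = R ^ F(L, sub[i]);
        R = L;
        L = t;
    }
    return ((uint64_t)L << 32) | R;
}

int main(void) {
    uint32_t sub[ROUNDS];
    schedule(0x0123456789ABCDEFull, sub);
    uint64_t p = 0xDEADBEEFCAFEF00Dull;
    uint64_t c = encrypt(p, sub);
    printf("plain  %016llx\ncipher %016llx\nagain  %016llx\n",
           (unsigned long long)p, (unsigned long long)c,
           (unsigned long long)decrypt(c, sub));
    return 0;
}
```

Note that decrypt reuses the same non-invertible F; only the order of the subkeys is reversed, which is the practical payoff of the Feistel structure described above.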
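The ARX style can likewise be illustrated with the quarter-round of ChaCha20 (a stream cipher rather than a block cipher, but its core is a representative ARX round function, specified in RFC 8439): every step is a 32-bit modular addition, a XOR, or a rotation by a fixed amount.

```c
/* The ChaCha20 quarter-round (RFC 8439): pure ARX, i.e. add,
   rotate by fixed amounts, and XOR.  Constant-time by construction. */
#include <stdint.h>

static uint32_t rotl32(uint32_t x, unsigned r) {
    return (x << r) | (x >> (32 - r));
}

static void quarter_round(uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d) {
    *a += *b;  *d ^= *a;  *d = rotl32(*d, 16);
    *c += *d;  *b ^= *c;  *b = rotl32(*b, 12);
    *a += *b;  *d ^= *a;  *d = rotl32(*d, 8);
    *c += *d;  *b ^= *c;  *b = rotl32(*b, 7);
}
```

ChaCha20 applies this quarter-round to the rows and then the diagonals of a 4x4 state of 32-bit words; the fixed rotation distances 16, 12, 8 and 7 are part of the cipher's specification.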
Modes of operation A block cipher by itself allows encryption only of a single data block of the cipher's block length. For a variable-length message, the data must first be partitioned into separate cipher blocks. In the simplest case, known as electronic codebook (ECB) mode, a message is first split into separate blocks of the cipher's block size (possibly extending the last block with padding bits), and then each block is encrypted and decrypted independently. However, such a naive method is generally insecure because equal plaintext blocks will always generate equal ciphertext blocks (for the same key), so patterns in the plaintext message become evident in the ciphertext output. To overcome this limitation, several so-called block cipher modes of operation have been designed and specified in national recommendations such as NIST 800-38A and BSI TR-02102 and in international standards such as ISO/IEC 10116. The general concept is to use randomization of the plaintext data, based on an additional input value frequently called an initialization vector, to create what is termed probabilistic encryption. In the popular cipher block chaining (CBC) mode, for encryption to be secure the initialization vector passed along with the plaintext message must be a random or pseudo-random value, which is combined with the first plaintext block by exclusive-or before it is encrypted (a chaining sketch follows below). The resultant ciphertext block is then used as the new initialization vector for the next plaintext block. In the cipher feedback (CFB) mode, which emulates a self-synchronizing stream cipher, the initialization vector is first encrypted and then added to the plaintext block. The output feedback (OFB) mode repeatedly encrypts the initialization vector to create a key stream for the emulation of a synchronous stream cipher. The newer counter (CTR) mode similarly creates a key stream, but has the advantage of needing only unique and not (pseudo-)random values as initialization vectors; the needed randomness is derived internally by using the initialization vector as a block counter and encrypting this counter for each block. From a security-theoretic point of view, modes of operation must provide what is known as semantic security. Informally, it means that given some ciphertext under an unknown key one cannot practically derive any information from the ciphertext (other than the length of the message) over what one would have known without seeing the ciphertext. It has been shown that all of the modes discussed above, with the exception of the ECB mode, provide this property under so-called chosen-plaintext attacks. Padding Some modes, such as the CBC mode, only operate on complete plaintext blocks. Simply extending the last block of a message with zero bits is insufficient, since it does not allow a receiver to easily distinguish messages that differ only in the number of padding bits. More importantly, such a simple solution gives rise to very efficient padding oracle attacks. A suitable padding scheme is therefore needed to extend the last plaintext block to the cipher's block size. While many popular schemes described in standards and in the literature have been shown to be vulnerable to padding oracle attacks, a solution that adds a single one bit and then extends the last block with zero bits, standardized as "padding method 2" in ISO/IEC 9797-1, has been proven secure against these attacks. 
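A minimal sketch of the CBC chaining rule described above, in C. The block cipher is abstracted as a function pointer so that the chaining logic stands alone; toy_block below is a placeholder invented for this example (a keyed byte rotation, trivially breakable), not a real cipher, and the caller is assumed to supply input whose length is already a multiple of the block size.

```c
/* CBC encryption sketch: C[i] = E(P[i] XOR C[i-1]), with C[-1] = IV.
   The block primitive is a placeholder; in practice it would be a
   real cipher such as AES. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK 16

typedef void (*block_fn)(uint8_t out[BLOCK], const uint8_t in[BLOCK],
                         const uint8_t key[BLOCK]);

static void cbc_encrypt(block_fn enc, const uint8_t key[BLOCK],
                        const uint8_t iv[BLOCK],
                        const uint8_t *pt, uint8_t *ct, size_t nblocks) {
    uint8_t chain[BLOCK], x[BLOCK];
    memcpy(chain, iv, BLOCK);
    for (size_t b = 0; b < nblocks; b++) {
        for (int i = 0; i < BLOCK; i++)          /* XOR with previous block */
            x[i] = pt[b * BLOCK + i] ^ chain[i];
        enc(ct + b * BLOCK, x, key);             /* encrypt the mixed block */
        memcpy(chain, ct + b * BLOCK, BLOCK);    /* feed ciphertext forward */
    }
}

/* Placeholder "cipher" for demonstration only. */
static void toy_block(uint8_t out[BLOCK], const uint8_t in[BLOCK],
                      const uint8_t key[BLOCK]) {
    for (int i = 0; i < BLOCK; i++)
        out[i] = in[(i + 1) % BLOCK] ^ key[i];
}

int main(void) {
    uint8_t key[BLOCK] = {0}, iv[BLOCK] = {1};
    uint8_t pt[2 * BLOCK] = {0}, ct[2 * BLOCK];
    cbc_encrypt(toy_block, key, iv, pt, ct, 2);
    return 0;
}
```

Because each plaintext block is XORed with the previous ciphertext block, two equal plaintext blocks almost never produce equal ciphertext blocks, which is exactly the ECB weakness that CBC repairs.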
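The ISO/IEC 9797-1 "padding method 2" mentioned above is simple enough to show directly: append a single 0x80 byte (a one bit followed by seven zero bits) and then zero bytes up to the block boundary. This sketch assumes byte-oriented data and a caller-provided buffer with room for one extra block; the function names are ours, not taken from any standard API.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* ISO/IEC 9797-1 padding method 2: append 0x80 (a single 1 bit),
   then 0x00 bytes until the length is a multiple of the block size.
   A full extra block is added when the message already ends on a
   block boundary, so the padding is always unambiguously removable.
   Returns the padded length; buf must have room for it. */
static size_t pad_m2(uint8_t *buf, size_t len, size_t block) {
    size_t padded = (len / block + 1) * block;
    buf[len] = 0x80;
    memset(buf + len + 1, 0x00, padded - len - 1);
    return padded;
}

/* Removal: strip trailing zeros, then the 0x80 marker.
   Returns the original length, or the input length to signal
   malformed padding. */
static size_t unpad_m2(const uint8_t *buf, size_t padded) {
    size_t i = padded;
    while (i > 0 && buf[i - 1] == 0x00) i--;
    return (i > 0 && buf[i - 1] == 0x80) ? i - 1 : padded;
}
```

The always-appended marker byte is what distinguishes this scheme from naive zero padding: a message that happens to end in zero bytes can still be recovered exactly.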
Differential cryptanalysis Differential cryptanalysis studies how differences in the plaintext propagate through the rounds of a cipher into differences in the ciphertext, typically using pairs of chosen plaintexts with a fixed XOR difference to recover information about the key. Linear cryptanalysis Linear cryptanalysis is a form of cryptanalysis based on finding affine approximations to the action of a cipher. Linear cryptanalysis is one of the two most widely used attacks on block ciphers; the other being differential cryptanalysis. The discovery is attributed to Mitsuru Matsui, who first applied the technique to the FEAL cipher (Matsui and Yamagishi, 1992). Integral cryptanalysis Integral cryptanalysis is a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences of pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus. Other techniques In addition to linear and differential cryptanalysis, there is a growing catalog of attacks: truncated differential cryptanalysis, partial differential cryptanalysis, integral cryptanalysis, which encompasses square and integral attacks, slide attacks, boomerang attacks, the XSL attack, impossible differential cryptanalysis and algebraic attacks. For a new block cipher design to have any credibility, it must demonstrate evidence of security against known attacks. Provable security When a block cipher is used in a given mode of operation, the resulting algorithm should ideally be about as secure as the block cipher itself. ECB (discussed above) emphatically lacks this property: regardless of how secure the underlying block cipher is, ECB mode can easily be attacked. On the other hand, CBC mode can be proven to be secure under the assumption that the underlying block cipher is likewise secure. Note, however, that making statements like this requires formal mathematical definitions for what it means for an encryption algorithm or a block cipher to "be secure". This section describes two common notions for what properties a block cipher should have. Each corresponds to a mathematical model that can be used to prove properties of higher-level algorithms, such as CBC. This general approach to cryptography – proving higher-level algorithms (such as CBC) are secure under explicitly stated assumptions regarding their components (such as a block cipher) – is known as provable security. Standard model Informally, a block cipher is secure in the standard model if an attacker cannot tell the difference between the block cipher (equipped with a random key) and a random permutation. To be a bit more precise, let E be an n-bit block cipher. We imagine the following game: The person running the game flips a coin. If the coin lands on heads, he chooses a random key K and defines the function f = E_K. If the coin lands on tails, he chooses a random permutation π on the set of n-bit strings, and defines the function f = π. The attacker chooses an n-bit string X, and the person running the game tells him the value of f(X). Step 2 is repeated a total of q times. (Each of these q interactions is a query.)
The attacker guesses how the coin landed. He wins if his guess is correct. The attacker, which we can model as an algorithm, is called an adversary. The function f (which the adversary was able to query) is called an oracle. Note that an adversary can trivially ensure a 50% chance of winning simply by guessing at random (or even by, for example, always guessing "heads"). Therefore, let P_E(A) denote the probability that the adversary A wins this game against E, and define the advantage of A as 2(P_E(A) − 1/2). It follows that if A guesses randomly, its advantage will be 0; on the other hand, if A always wins, then its advantage is 1. The block cipher E is a pseudo-random permutation (PRP) if no adversary has an advantage significantly greater than 0, given specified restrictions on q and the adversary's running time. If in Step 2 above adversaries have the option of learning f^{−1}(X) instead of f(X) (but still have only small advantages) then E is a strong PRP (SPRP). An adversary is non-adaptive if it chooses all q values for X before the game begins (that is, it does not use any information gleaned from previous queries to choose each X as it goes). These definitions have proven useful for analyzing various modes of operation. For example, one can define a similar game for measuring the security of a block cipher-based encryption algorithm, and then try to show (through a reduction argument) that the probability of an adversary winning this new game is not much more than P_E(A) for some A. (The reduction typically provides limits on q and the running time of A.) Equivalently, if P_E(A) is small for all relevant A, then no attacker has a significant probability of winning the new game. This formalizes the idea that the higher-level algorithm inherits the block cipher's security. Ideal cipher model Informally, in the ideal cipher model the block cipher is idealized as a family of independent, uniformly random permutations, one for each possible key, which all parties may query in both the forward and inverse directions; this stronger heuristic model is used to analyze constructions, such as block cipher-based hash functions, whose security cannot be established in the standard model alone. Practical evaluation Block ciphers may be evaluated according to multiple criteria in practice. Common factors include: Key parameters, such as its key size and block size, both of which provide an upper bound on the security of the cipher. The estimated security level, which is based on the confidence gained in the block cipher design after it has largely withstood major efforts in cryptanalysis over time, the design's mathematical soundness, and the existence of practical or certificational attacks. The cipher's complexity and its suitability for implementation in hardware or software. Hardware implementations may measure the complexity in terms of gate count or energy consumption, which are important parameters for resource-constrained devices. The cipher's performance in terms of processing throughput on various platforms, including its memory requirements. The cost of the cipher, which refers to licensing requirements that may apply due to intellectual property rights. The flexibility of the cipher, which includes its ability to support multiple key sizes and block lengths. Notable block ciphers Lucifer / DES Lucifer is generally considered to be the first civilian block cipher, developed at IBM in the 1970s based on work done by Horst Feistel. A revised version of the algorithm was adopted as a U.S. government Federal Information Processing Standard: FIPS PUB 46 Data Encryption Standard (DES). It was chosen by the U.S. National Bureau of Standards (NBS) after a public invitation for submissions and some internal changes by NBS (and, potentially, the NSA). DES was publicly released in 1976 and has been widely used.
DES was designed to, among other things, resist a certain cryptanalytic attack known to the NSA and rediscovered by IBM, though unknown publicly until rediscovered again and published by Eli Biham and Adi Shamir in the late 1980s. The technique is called differential cryptanalysis and remains one of the few general attacks against block ciphers; linear cryptanalysis is another, but may have been unknown even to the NSA prior to its publication by Mitsuru Matsui. DES prompted a large amount of other work and publications in cryptography and cryptanalysis in the open community and it inspired many new cipher designs. DES has a block size of 64 bits and a key size of 56 bits. 64-bit blocks became common in block cipher designs after DES. Key length depended on several factors, including government regulation. Many observers in the 1970s commented that the 56-bit key length used for DES was too short. As time went on, its inadequacy became apparent, especially after a special-purpose machine designed to break DES was demonstrated in 1998 by the Electronic Frontier Foundation. An extension to DES, Triple DES, triple-encrypts each block with either two independent keys (112-bit key and 80-bit security) or three independent keys (168-bit key and 112-bit security). It was widely adopted as a replacement. As of 2011, the three-key version is still considered secure, though the National Institute of Standards and Technology (NIST) standards no longer permit the use of the two-key version in new applications, due to its 80-bit security level. IDEA The International Data Encryption Algorithm (IDEA) is a block cipher designed by James Massey of ETH Zurich and Xuejia Lai; it was first described in 1991, as an intended replacement for DES. IDEA operates on 64-bit blocks using a 128-bit key, and consists of a series of eight identical transformations (a round) and an output transformation (the half-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from different groups – modular addition and multiplication, and bitwise exclusive or (XOR) – which are algebraically "incompatible" in some sense. The designers analysed IDEA to measure its strength against differential cryptanalysis and concluded that it is immune under certain assumptions. No successful linear or algebraic weaknesses have been reported. As of 2012, the best attack which applies to all keys can break full 8.5-round IDEA using a narrow-bicliques attack about four times faster than brute force. RC5 RC5 is a block cipher designed by Ronald Rivest in 1994 which, unlike many other ciphers, has a variable block size (32, 64 or 128 bits), key size (0 to 2040 bits) and number of rounds (0 to 255). The original suggested choice of parameters was a block size of 64 bits, a 128-bit key and 12 rounds. A key feature of RC5 is the use of data-dependent rotations; one of the goals of RC5 was to prompt the study and evaluation of such operations as a cryptographic primitive. RC5 also consists of a number of modular additions and XORs. The general structure of the algorithm is a Feistel-like network. The encryption and decryption routines can be specified in a few lines of code. The key schedule, however, is more complex, expanding the key using an essentially one-way function with the binary expansions of both e and the golden ratio as sources of "nothing up my sleeve numbers".
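Those few lines are easy to exhibit. The following Python sketch follows the published RC5 encryption routine for 32-bit words; the expanded key table S (2 × rounds + 2 words) is assumed to have been produced already by the key schedule, which is omitted here:

W = 32                       # word size in bits (RC5-32)
MASK = (1 << W) - 1

def rotl(x, y):
    y %= W                   # only the low log2(W) bits of y are used
    return ((x << y) | (x >> ((W - y) % W))) & MASK

def rc5_encrypt(A, B, S, rounds=12):
    # One block is held as two w-bit words A and B.
    A = (A + S[0]) & MASK
    B = (B + S[1]) & MASK
    for i in range(1, rounds + 1):
        A = (rotl(A ^ B, B) + S[2 * i]) & MASK      # rotation amount depends on B
        B = (rotl(B ^ A, A) + S[2 * i + 1]) & MASK  # rotation amount depends on A
    return A, B

The data-dependent rotations described above appear directly as the rotl amounts, which are taken from the data words themselves rather than from the key.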
The tantalising simplicity of the algorithm together with the novelty of the data-dependent rotations has made RC5 an attractive object of study for cryptanalysts. 12-round RC5 (with 64-bit blocks) is susceptible to a differential attack using 2^44 chosen plaintexts. 18–20 rounds are suggested as sufficient protection. Rijndael / AES The Rijndael cipher, developed by the Belgian cryptographers Joan Daemen and Vincent Rijmen, was one of the competing designs to replace DES. It won the 5-year public competition to become the AES (Advanced Encryption Standard). Adopted by NIST in 2001, AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas Rijndael can be specified with block and key sizes in any multiple of 32 bits, with a minimum of 128 bits. The block size has a maximum of 256 bits, but the key size has no theoretical maximum. AES operates on a 4×4 column-major order matrix of bytes, termed the state (versions of Rijndael with a larger block size have additional columns in the state). Blowfish Blowfish is a block cipher, designed in 1993 by Bruce Schneier and included in a large number of cipher suites and encryption products. Blowfish has a 64-bit block size and a variable key length from 1 bit up to 448 bits. It is a 16-round Feistel cipher and uses large key-dependent S-boxes. Notable features of the design include the key-dependent S-boxes and a highly complex key schedule. It was designed as a general-purpose algorithm, intended as an alternative to the ageing DES and free of the problems and constraints associated with other algorithms. At the time Blowfish was released, many other designs were proprietary, encumbered by patents or were commercial/government secrets. Schneier has stated that, "Blowfish is unpatented, and will remain so in all countries. The algorithm is hereby placed in the public domain, and can be freely used by anyone." The same applies to Twofish, a successor algorithm from Schneier. Generalizations Tweakable block ciphers M. Liskov, R. Rivest, and D. Wagner have described a generalized version of block ciphers called "tweakable" block ciphers. A tweakable block cipher accepts a second input called the tweak along with its usual plaintext or ciphertext input. The tweak, along with the key, selects the permutation computed by the cipher. If changing tweaks is sufficiently lightweight (compared with a usually fairly expensive key setup operation), then some interesting new operation modes become possible. The disk encryption theory article describes some of these modes. Format-preserving encryption Block ciphers traditionally work over a binary alphabet. That is, both the input and the output are binary strings, consisting of n zeroes and ones. In some situations, however, one may wish to have a block cipher that works over some other alphabet; for example, encrypting 16-digit credit card numbers in such a way that the ciphertext is also a 16-digit number might facilitate adding an encryption layer to legacy software. This is an example of format-preserving encryption. More generally, format-preserving encryption requires a keyed permutation on some finite language. This makes format-preserving encryption schemes a natural generalization of (tweakable) block ciphers. In contrast, traditional encryption schemes, such as CBC, are not permutations because the same plaintext can encrypt to multiple different ciphertexts, even when using a fixed key.
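One generic way to obtain such a keyed permutation on an awkward domain is the cycle-walking technique: apply a block cipher on a convenient larger set and re-encrypt until the value lands back inside the domain. The sketch below is illustrative of the generic technique only, not of any standardized FPE mode; encrypt_int stands for an assumed keyed permutation on the integers 0 to 2^54 − 1 (any cipher acting as a bijection on that range would do):

def cycle_walk(x, encrypt_int, domain_size=10**16):
    # Re-apply the underlying permutation until the result falls inside the
    # target domain (here, 16-digit numbers). Because the underlying map is
    # a bijection on the larger set, this defines a bijection on the domain.
    y = encrypt_int(x)
    while y >= domain_size:
        y = encrypt_int(y)
    return y

On average the loop runs about (size of the larger set) / (size of the domain) times, so the larger set should be as small as possible; 2^54 is the smallest power of two covering 10^16, giving fewer than two applications on average.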
Relation to other cryptographic primitives Block ciphers can be used to build other cryptographic primitives, such as those below. For these other primitives to be cryptographically secure, care has to be taken to build them the right way. Stream ciphers can be built using block ciphers. OFB-mode and CTR mode are block modes that turn a block cipher into a stream cipher. Cryptographic hash functions can be built using block ciphers. See one-way compression function for descriptions of several such methods. The methods resemble the block cipher modes of operation usually used for encryption. Cryptographically secure pseudorandom number generators (CSPRNGs) can be built using block ciphers. Secure pseudorandom permutations of arbitrarily sized finite sets can be constructed with block ciphers; see Format-Preserving Encryption. A publicly known unpredictable permutation combined with key whitening is enough to construct a block cipher, such as the single-key Even–Mansour cipher, perhaps the simplest possible provably secure block cipher. Message authentication codes (MACs) are often built from block ciphers. CBC-MAC, OMAC and PMAC are such MACs. Authenticated encryption is also built from block ciphers; it encrypts and MACs at the same time, that is, it provides both confidentiality and authentication. CCM, EAX, GCM and OCB are such authenticated encryption modes. Just as block ciphers can be used to build hash functions, hash functions can be used to build block ciphers. Examples of such block ciphers are SHACAL, BEAR and LION. See also Cipher security summary Topics in cryptography XOR cipher References Further reading External links A list of many symmetric algorithms, the majority of which are block ciphers. The block cipher lounge What is a block cipher? from RSA FAQ Block Cipher based on Gold Sequences and Chaotic Logistic Tent System Cryptographic primitives
51894563
https://en.wikipedia.org/wiki/Scoro
Scoro
Scoro is a software-as-a-service solution for professional and creative services. The all-in-one business management software combines project management with time and team management, sales, billing, and professional services automation. The company has offices in London, New York, Tallinn, Riga, and Vilnius. History Scoro was founded in 2013, in Tallinn, Estonia. In August 2016, Scoro secured $1.9 million in investment capital during its Seed round of funding, which was led by three VCs: Inventure, SmartCap, and Alchemist Accelerator. In November 2018, Scoro closed a $5 million Series A round led by Livonia Partners with participation from existing investors Inventure and Tera Ventures. The deal brought the total amount raised from investors to $6.9 million. The company has been named one of the fastest-growing technology firms in Central Europe by the Deloitte Technology Fast 500. Scoro has also been featured on Inc. magazine's Inc. 5000 list as one of the fastest-growing private companies in America. In 2019 Scoro was named as one of the hottest young companies across Europe and Israel by TNW. Software Scoro is a collaborative Software-as-a-Service (SaaS) product that enables its users to manage and track projects, keep track of finances, manage the client base, compile and send quotes and invoices, and get enterprise-level reports. Features Calendar Tasks Projects Timesheets Resource planning Time tracker Planner Phases & milestones Project templates Quotes Bills Invoices & receipts Sales pipeline Customer database statistics Budgets & forecasts Dashboards News Feed Work report Utilization report Financial report Pipeline report Sales report Margin report Supplier report WIP report Mobile Scoro introduced its version 1.0 iOS and Android application in 2016, allowing users to work from their iPhone or Android devices. Integrations Google Calendar Apple Calendar Outlook Dropbox Google Drive Toggl Zapier MS Exchange FTP QuickBooks Xero MailChimp Recognition and awards 2017 Technology Fast 50 by Deloitte 2018 Inc. 5000 fastest-growing companies in America Easiest to Use Business Process Management Software by G2 2019 Best Software Companies in EMEA by G2 FrontRunners for Project Management by Software Advice Category leader for Business Management category by GetApp Category leader for Project Management category by GetApp Easiest to Use Business Software by G2 Locations Scoro's headquarters is situated in London, UK. In April 2018 Scoro announced the opening of a new office in New York, USA. Scoro also has offices in Tallinn, Riga, and Vilnius. References Software companies of Estonia Software companies based in London British companies established in 2013
11060547
https://en.wikipedia.org/wiki/A12%20Authentication
A12 Authentication
A12 Authentication (Access Authentication for 1xEV-DO) is a CHAP-based mechanism used by a CDMA2000 Access Network (AN) to authenticate a 1xEV-DO Access Terminal (AT). Evolution-Data Optimized (EV-DO, EVDO, etc.) is a telecommunications standard for the wireless transmission of data through radio signals, typically for broadband Internet access. In computing, the Challenge-Handshake Authentication Protocol (CHAP) authenticates a user or network host to an authenticating entity. That entity may be, for example, an Internet service provider. CDMA2000 is the core wireless air interface standard. Description A12 authentication occurs when an AT first attempts to access the AN and is repeated after some authentication timeout period. The element in the AN that performs this authentication is the Radio Network Controller (RNC) using its Access Network AAA (AN-AAA). In order to support A12 authentication, matching A12 credentials (i.e., an A12 Network Address Identifier (NAI) and A12 CHAP key) must be provisioned into the AT and the user's home AAA server. Since these credentials are only shared between the AT and its home AAA, the AN-AAA forwards A12 challenge responses received from an AT to its home AAA to determine whether they are correct. A12 authentication is separate from packet data authentication that may occur later when a data session is being established. A12 authentication is important for roaming since all participating operators in the IRT have agreed to support it. If A12 credentials are not provisioned into an AT, that AT will not be able to access any visited network that performs A12 authentication. In addition, the Mobile Node Identifier (MN ID) is obtained from the AN-AAA during successful A12 authentication. This MN ID is used by the AN on the A8/A9 and A10/A11 interfaces to enable handoffs of Packet Data Serving Node (PDSN) packet data sessions between ANs and between 1xEV-DO and 1xRTT systems. If A12 authentication is not performed, the MN ID must be somehow derived and such handoffs may not be possible without establishing a new Point-to-Point Protocol (PPP) session. A12 authentication is defined in TIA-878 (3GPP2 A.S0008). See also List of authentication protocols List of CDMA2000 networks Mobile broadband References see CDG Reference Document #136. Code division multiple access
81737
https://en.wikipedia.org/wiki/Andromache
Andromache
In Greek mythology, Andromache was the wife of Hector, daughter of Eetion, and sister to Podes. She was born and raised in the city of Cilician Thebe, over which her father ruled. The name means 'man battler' or 'fighter of men' or 'man fighter' (note that there was also a famous Amazon warrior named Andromache, probably in this meaning) or 'man's battle' (that is: 'courage' or 'manly virtue'), from the Greek stems andr- ('man') and machē ('battle'). During the Trojan War, after Hector was killed by Achilles and the city taken by the Greeks, the Greek herald Talthybius informed her of the plan to kill Astyanax, her son by Hector, by throwing him from the city walls. This act was carried out by Neoptolemus, who then took Andromache as a concubine and Hector's brother, Helenus, as a slave. By Neoptolemus, she was the mother of Molossus, and according to Pausanias, of Pielus and Pergamus. When Neoptolemus died, Andromache married Helenus and became Queen of Epirus. Pausanias also implies that Helenus's son, Cestrinus, was by Andromache. In Epirus, Andromache faithfully continued to make offerings at Hector's cenotaph. Andromache eventually went to live with her youngest son, Pergamus, in Pergamum, where she died of old age. Andromache was famous for her fidelity and virtue; her character represents the suffering of Trojan women during war. Classical treatment Homer. Iliad VI, 390–470; XXII, 437–515 Sappho's Fragment 44 Euripides. Andromache. Euripides. The Trojan Women. Ennius. Andromacha TrRF II 23. Virgil. Aeneid III, 294–355. Ovid. Ars Amatoria III, 777–778. Seneca. The Trojan Women. Bibliotheca III, xii, 6, Epitome V, 23; VI, 12. Families Andromache was born in Thebe, a city that Achilles later sacked, killing her father Eetion and seven brothers. After this, her mother died of illness (6.425). She was taken from her father's household by Hector, who had brought countless wedding-gifts (22.470–72). Thus Priam's household alone provides Andromache with her only familial support. In contrast to the inappropriate relationship of Paris and Helen, Hector and Andromache fit the Greek ideal of a happy and productive marriage, which heightens the tragedy of their shared misfortune. Andromache is alone after Troy falls and her son is killed. Notably, Andromache remains unnamed in Iliad 22, referred to only as the wife of Hector (Greek alokhos), indicating the centrality of her status as Hector's wife and of the marriage itself to her identity. The Greeks divide the Trojan women as spoils of war and permanently separate them from the ruins of Troy and from one another. Hector's fears of her life as a captive woman are realized as her family is entirely stripped from her by the violence of war, as she fulfills the fate of conquered women in ancient warfare (6.450–465). Without her familial structure, Andromache is a displaced woman who must live outside familiar and even safe societal boundaries. Role in mourning her husband Andromache's gradual discovery of her husband's death and her immediate lamentation (22.437–515) culminate the shorter lamentations of Priam and Hecuba upon Hector's death (22.405–36). In accordance with traditional customs of mourning, Andromache responds with an immediate and impulsive outburst of grief (goos) that begins the ritual lamentation. She casts away her various pieces of headdress (22.468–72) and leads the Trojan women in ritual mourning, as Priam and Hecuba had done before her (22.405–36).
Although Andromache adheres to the formal practice of female lamentation in Homeric epic, the raw emotion of her discovery yields a miserable beginning to a new era in her life without her husband and, ultimately, without a home. The final stage of the mourning process occurs in Iliad 24 in the formal, communal grieving (thrēnos) upon the return of Hector's body (24.703–804). In a fragment of Ennius' Andromacha, quoted by Cicero in the Tusculan Disputations (3.44–46), Andromacha sings about her loss of Hector. Duties as wife In Iliad 22, Andromache is portrayed as the perfect wife, weaving a cloak for her husband in the innermost chambers of the house and preparing a bath in anticipation of his return from battle (22.440–6). Here she is carrying out an action Hector had ordered her to perform during their conversation in Iliad 6 (6.490–92), and this obedience is another display of womanly virtue in Homer's eyes. However, Andromache is seen in Iliad 6 in an unusual place for the traditional housewife, standing before the ramparts of Troy (6.370–373). Traditional gender roles are breached as well, as Andromache gives Hector military advice (6.433–439). Although her behavior may seem nontraditional, hard times disrupt the separate spheres of men and women, requiring a shared civic response to the defence of the city as a whole. Andromache's sudden tactical lecture is a way to keep Hector close, by guarding a section of the wall instead of fighting out in the plains. Andromache's role as a mother, a fundamental element of her position in marriage, is emphasized within this same conversation. Their infant son, Astyanax, is also present at the ramparts as a maid tends to him. Hector takes his son from the maid, yet returns him to his wife, a small action that provides great insight into the importance Homer placed on her care-taking duties as mother (6.466–483). A bonding moment between mother and father occurs in this scene when Hector's helmet scares Astyanax, providing a moment of light relief in the story. After Hector's death in Iliad 22, Andromache's foremost concern is Astyanax's fate as a mistreated orphan (22.477–514). In Euripides' The Trojan Women, Andromache despairs at the murder of her son Astyanax and is then given to Neoptolemus as a concubine. In his Andromache, Euripides dramatizes the episode in which she and her child were nearly assassinated by Hermione, the wife of Neoptolemus and daughter of Helen and Menelaus. Modern treatment She is also the subject of a tragedy by French classical playwright Jean Racine (1639–1699), entitled Andromaque, and a minor character in Shakespeare's Troilus and Cressida. "The Andromache" is referenced in The Duc De L'Omelette, written by Edgar Allan Poe and published in 1832. She also appears, importantly, in Baudelaire's 1857 poem "Le Cygne" in Les Fleurs du Mal. Andromache is the subject of a 1932 opera by German composer Herbert Windt and also a lyric scena for soprano and orchestra by Samuel Barber. She was portrayed by Vanessa Redgrave in the 1971 film version of Euripides' The Trojan Women, and by Saffron Burrows in the 2004 film Troy. She also appears as a character in David Gemmell's Troy series. In the 2018 TV miniseries Troy: Fall of a City, she was portrayed by Chloe Pirrie. She is reimagined in The Old Guard as a present-day immortal paladin who aspires to improve the human condition despite her cynicism rooted in ongoing global conflict.
References External links Characters in the Aeneid Princesses in Greek mythology Trojans Women of the Trojan war Women in Greek mythology Characters in Greek mythology
52315025
https://en.wikipedia.org/wiki/OneSpan
OneSpan
OneSpan (formerly Vasco Data Security International, Inc.) is a publicly traded cybersecurity technology company based in Chicago, Illinois, with offices in Montreal, Brussels and Zurich. The company offers a cloud-based and open-architected anti-fraud platform and is historically known for its multi-factor authentication and electronic signature software. It was founded by T. Kendall Hunt in 1991 and held its initial public offering (IPO) in January 2000. OneSpan is a member of the FIDO Alliance Board. History In 1984, T. Kendall Hunt founded Vasco as a consulting and software services company for corporate and governmental agencies. The company acquired ThumbScan, which claimed to have the first fingerprint reader device for a computer, in 1991. It was renamed Vasco Data Security International by 1993 and expanded its offerings to include data security. Vasco was incorporated in 1997 and held its initial public offering in January 2000. Vasco started developing its Digipass technology in the early 2000s. The company marketed the technology internationally in Belgium. In 2009, Vasco announced that Digipass two-factor authentication was available in the App Store for iPhone and iPod Touch. Forbes recognized Vasco on its list of "America's Fastest-Growing Tech Companies" that year. In January 2011, Vasco acquired DigiNotar, a Dutch certificate authority. In June 2011, DigiNotar was hacked and began issuing false security certificates. When the news broke, confidence in the company was shattered, all of its issued certificates were cancelled, and it went bankrupt. The company established its international headquarters at Dubai Silicon Oasis in 2012. At that time, Vasco also announced that it would lower entry barriers to its EMEA channel for VARs. It became a member of the Fast IDentity Online (FIDO) Alliance in June 2014 and was later recognized in Gartner's Magic Quadrant for User Authentication. In October 2015, Vasco acquired Silanis Technology, a Canadian document e-signature company, for $113 million. By early 2016, the company's cloud electronic signature software, eSignLive, was updated to include integration with Salesforce. Vasco announced a face recognition authentication feature for Digipass in May 2016. The company has partnerships with financial institutions including HSBC Bank USA, Fedict, Rabobank, Arab Bank and Riyadh Bank. OneSpan On May 30, 2018, Vasco changed its name to OneSpan. It now trades under the ticker symbol OSPN. In May 2018, the company acquired Dealflo, a UK and Canada-based financial agreement automation software company, for GBP 41 million. Technology Identity Verification: validate ID documents and consumer identities via third-party identity and verification services through a single API integration. Authentication: authenticate users and transactions using a range of multi-factor authentication services, including hardware & software tokens and biometric capabilities Risk Analytics: analyze mobile, app and transaction data, in real-time, to detect fraud Mobile App Security: detect and mitigate malicious mobile app attacks before they can do damage E-Signature: enable customers to e-sign on any device, while strengthening compliance Services are delivered through OneSpan's open, cloud-based Trusted Identity (TID) platform. References External links Oakbrook Terrace, Illinois Companies established in 1991 Computer security companies Companies based in DuPage County, Illinois
551104
https://en.wikipedia.org/wiki/Cylinder%201024
Cylinder 1024
Cylinder 1024 is the first cylinder of a hard disk that was inaccessible in the original IBM PC compatible hardware specification, interrupt 13h, which uses cylinder-head-sector addressing. At boot time, the BIOS of many very old PCs could only access the first 1024 cylinders, numbered 0 to 1023, as the specific CHS addressing used by the BIOS interrupt 13 API only defines 10 bits for the cylinder number (2^10 = 1024). This was a problem for operating systems on the x86 platform, as the BIOS must be able to load the bootloader and the entire kernel image into memory. Both of these must, therefore, be located on the first 1024 cylinders of the disk. Older versions of Microsoft Windows addressed this by requiring that the operating system be installed in the first partition. Partly because of this limitation, users of the Linux operating system have traditionally created a partition to reside within the first 1024 cylinders of the disk, containing little more than the kernel and bootloader. See also Cylinders 0 to 79 of an Amiga Disk File (ADF) External links — includes a discussion of the cylinder 1024 limitation. "Large Disk HOWTO - History of BIOS and IDE limits" IBM PC compatibles Rotating disc computer storage media Hard disk computer storage
958263
https://en.wikipedia.org/wiki/Green%20Hills%20Software
Green Hills Software
Green Hills Software is a privately owned company that builds operating systems and programming tools for embedded systems. The firm was founded in 1982 by Dan O'Dowd and Carl Rosenberg. Its world headquarters are in Santa Barbara, California. History Green Hills Software and Wind River Systems enacted a 99-year contract as cooperative peers in the embedded software engineering market throughout the 1990s, with their relationship ending in a series of lawsuits throughout the early 2000s. The two companies then parted ways in opposite directions: Wind River publicly embraced Linux and open-source software, while Green Hills initiated a public relations campaign decrying its use in matters of national security. In 2008, the Green Hills real-time operating system (RTOS) named Integrity-178 was the first system to be certified by the National Information Assurance Partnership (NIAP), composed of the National Security Agency (NSA) and the National Institute of Standards and Technology (NIST), to Evaluation Assurance Level (EAL) 6+. By November 2008, it was announced that a commercialized version of Integrity-178B would be available to be sold to the private sector by Integrity Global Security, a subsidiary of Green Hills Software. On March 27, 2012, a contract was announced between Green Hills Software and Nintendo. This designates MULTI as the official integrated development environment and toolchain for Nintendo and its licensed developers to program the Wii U video game console. On February 25, 2014, it was announced that the operating system Integrity had been chosen by Urban Aeronautics for their AirMule flying car unmanned aerial vehicle (UAV), since renamed the Tactical Robotics Cormorant. Selected products Real-time operating systems Integrity is a POSIX real-time operating system (RTOS). An Integrity variant, named Integrity-178B, was certified to Common Criteria Evaluation Assurance Level (EAL) 6+, High Robustness in November 2008. Micro Velosity (stylized as µ-velOSity) is a real-time microkernel for resource-constrained devices. Compilers Green Hills produces compilers for the programming languages C, C++, Fortran, and Ada. They are cross-platform, for 32- and 64-bit microprocessors, including ARM, Blackfin, ColdFire, MIPS, PowerPC, SuperH, StarCore, x86, V850, and XScale. Integrated development environments MULTI is an integrated development environment (IDE) for the programming languages C, C++, Embedded C++ (EC++), and Ada, aimed at embedded engineers. TimeMachine is a set of tools for optimizing and debugging C and C++ software. TimeMachine (introduced 2003) supports reverse debugging, a feature that later also became available in the free GNU Debugger (GDB) 7.0 (2009). References Software companies based in California Computer companies established in 1982 1982 establishments in California Companies based in Santa Barbara County, California Microkernels Software companies of the United States
7278723
https://en.wikipedia.org/wiki/Dan%20Boneh
Dan Boneh
Dan Boneh is an Israeli-American professor in applied cryptography and computer security at Stanford University. In 2016, Boneh was elected a member of the National Academy of Engineering for contributions to the theory and practice of cryptography and computer security. Biography Born in Israel in 1969, Boneh obtained his Ph.D. in Computer Science from Princeton University in 1996 under the supervision of Richard J. Lipton. Boneh is one of the principal contributors to the development of pairing-based cryptography, along with Matt Franklin of the University of California, Davis. He joined the faculty of Stanford University in 1997, and became professor of computer science and electrical engineering. He teaches massive open online courses on the online learning platform Coursera. In 1999 he was awarded a fellowship from the David and Lucile Packard Foundation. In 2002, he co-founded a company called Voltage Security with three of his students. The company was acquired by Hewlett-Packard in 2015. In 2018, Boneh became co-director (with David Mazières) of the newly founded Center for Blockchain Research at Stanford, predicting at the time that "Blockchains will become increasingly critical to doing business globally." Dr. Boneh is also known for putting his entire introductory cryptography course online for free. The course is also available via Coursera. Awards 2021 Fellow of the American Mathematical Society 2020 Selfridge Prize with Jonathan Love. 2016 Elected to the US National Academy of Engineering 2016 Fellow of the Association for Computing Machinery 2014 ACM Prize in Computing (formerly called the ACM-Infosys Foundation award) 2013 Gödel Prize, with Matthew K. Franklin and Antoine Joux, for his work on the Boneh–Franklin scheme 2005 RSA Award 1999 Sloan Research Fellowship 1999 Packard Award Publications Some of Boneh's results in cryptography include: 2018: Verifiable Delay Functions 2015: Privacy-preserving proofs of solvency for Bitcoin exchanges 2010: Efficient Identity-Based Encryption from Learning with Errors Assumption (with Shweta Agrawal and Xavier Boyen) 2010: He was involved in designing tcpcrypt, TCP extensions for transport-level security 2005: A partially homomorphic cryptosystem (with Eu-Jin Goh and Kobbi Nissim) 2005: The first broadcast encryption system with full collusion resistance (with Craig Gentry and Brent Waters) 2003: A timing attack on OpenSSL (with David Brumley) 2001: An efficient identity-based encryption system (with Matt Franklin) based on the Weil pairing. 1999: Cryptanalysis of RSA when the private key is less than N^0.292 (with Glenn Durfee) 1997: Fault-based cryptanalysis of public-key systems (with Richard J. Lipton and Richard DeMillo) 1995: Collision resistant fingerprinting codes for digital data (with James Shaw) 1995: Cryptanalysis using a DNA computer (with Christopher Dunworth and Richard J. Lipton) Some of his contributions in computer security include: 2007: "Show[ing] that the time web sites take to respond to HTTP requests can leak private information."
2005: PwdHash, a browser extension that transparently produces a different password for each site References External links Dan Boneh's Home Page Dan Boneh's Stanford Research Group Living people 1969 births Israeli computer scientists Modern cryptographers Public-key cryptographers Computer security academics Stanford University School of Engineering faculty Stanford University Department of Electrical Engineering faculty Israeli cryptographers Princeton University alumni Technion – Israel Institute of Technology alumni Fellows of the American Mathematical Society Fellows of the Association for Computing Machinery Gödel Prize laureates Simons Investigator Recipients of the ACM Prize in Computing People associated with cryptocurrency
6552322
https://en.wikipedia.org/wiki/The%20Pioneers%20%28band%29
The Pioneers (band)
The Pioneers are a Jamaican reggae vocal trio, whose main period of success was in the 1960s. The trio has had different line-ups, and still occasionally performs. Career Founding and early years: 1962-67 The Pioneers were formed in 1962 by brothers Sydney and Derrick Crooks, and their friend Winston Hewitt. Their early recordings "Good Nanny" and "I'll Never Come Running Back to You" were self-produced at the Treasure Isle studio using money lent to the Crooks brothers by their mother and appeared on Ken Lack's Caltone label. Several other singles followed, none of them hits, before Hewitt emigrated to Canada in 1966. Hewitt was replaced for around a year by former Heptone Glen Adams. The Pioneers' early singles were not successful, and Sydney began promoting concerts, while Derrick took up a job with the Alcoa bauxite company. The group broke up in mid-1967. Revival: 1967-68 Sydney began working at Joe Gibbs' record shop, and through Gibbs, returned to recording. At his first session (to record "Give Me Little Loving"), with the other members of The Pioneers gone, Crooks recruited Jackie Robinson, whom he found outside the studio just before recording began. Crooks later said of the encounter: "When I was about to voice the song I looked outside the studio and I saw a little boy sitting on a stone. I said 'Hey, come here man, you can sing?' He sang the harmony for 'Give Me Little Loving' and his name was Jackie Robinson. After that I said to him 'You are one of the Pioneers from today' and he became the lead singer of the Pioneers". The new version of The Pioneers enjoyed success with singles such as "Longshot" (a track written and produced by Lee "Scratch" Perry on Gibbs' behalf about a long-lived but unsuccessful racehorse), "Jackpot", "Catch the Beat", and "Pan Yu Machete" (an attack on Perry, who left Gibbs in 1968 to start working on his own productions). Crooks and Robinson also recorded as The Soul Mates in 1967. The group parted ways with Gibbs after an argument and moved on to work with Leslie Kong, the first recording for Kong being "Samfie Man", a song about a confidence trickster, which topped the Jamaican singles chart. The classic trio, and the move to the UK: 1969-77 After a few further singles with Kong, the group recruited Desmond Dekker's half-brother George Agard to become a trio again. Sydney Crooks and his former Pioneer brother Derrick, along with Winston Bailey, also recorded as The Slickers, recording "Nana" for producer Neremiah Reid. The Pioneers scored again with a sequel to "Long Shot", "Long Shot (Kick De Bucket)". When Kong heard that the horse had died (during its 203rd race), he insisted that the group write a song about it; the song was written and recorded quickly and became an instant hit. The band was popular in the United Kingdom, particularly among skinheads. "Long Shot Kick de Bucket" was a big hit in 1969, and led to a tour of the UK, during which they resolved to relocate there. Their cover of Jimmy Cliff's "Let Your Yeah Be Yeah" made No. 5 as a single in 1971. The band moved to the UK in 1970. Their third UK hit was "Give and Take", which reached No. 35 in January 1972. Soul years: 1976-79 In 1976, the Pioneers teamed up with Eddy Grant for an album for Mercury Records called Feel The Rhythm, which featured a nude female model on its cover.
Grant preferred to produce them as a soul group and they released a number of singles in that idiom, including "Broken Man", "Feel The Rhythm" and "My Good Friend James". The change of style was a critical but not a commercial success and the band split up for a time in the late 1970s, with Crooks concentrating on production work and continuing with his brother in The Slickers, while Agard and Robinson continued to record, together on the album George & Jackie Sing, and separately. First reformation: 1979-89 The group reformed in the late 1970s and continued until 1989, when they split again to concentrate on separate careers. "Long Shot Kick de Bucket" was covered by The Specials on their The Special AKA Live! EP, which was a UK No. 1 hit in 1980 and resulted in The Pioneers' original version being reissued (as a double A side with "The Liquidator" by Harry J Allstars) and reaching No. 42 in the UK Singles Chart. The Pioneers song "Starvation" was also covered on the "Starvation/Tam Tam Pour L'Ethiopie" charity single released in 1985, which peaked at UK number 33. The Pioneers shared lead vocal duties on the single with members of UB40, with backing vocals by General Public. Second reformation: 1999-present In 1999, the group reformed again and have continued to perform together since. In 2005, the Pioneers performed at the Maranhão Roots Reggae Festival in São Luís, Brazil before 15,000 fans. The following year they appeared at the Godiva Festival in the War Memorial Park, Coventry, England. "Long Shot Kick de Bucket" was used in the 2008 film The Wackness. Discography Albums Greetings from the Pioneers - 1968 - Amalgamated Records - produced by Joe Gibbs Long Shot - 1969 - Trojan Records - produced by Leslie Kong Battle of the Giants - 1970 - Trojan Records - produced by Leslie Kong Yeah! - 1971 - Trojan Records I Believe in Love - 1972 - Trojan Records Freedom Feeling - 1973 - Trojan Records I'm Gonna Knock on Your Door - 1974 - Trojan Records Feel the Rhythm - 1976 - Mercury Records Roll On Muddy River - 1977 - Trojan Records Pusher Man - 1978 - Squad Disco Pusher Man - 1978 - Trojan Records (different tracks to the Squad Disco release) Baby I Love You - 1979 - Taretone What a Feeling - 1980 - Pioneer International Reggae for Lovers - 1982 - Vista Sounds Reggae for Lovers Vol. 2 - 1983 - Vista Sounds More Reggae for Lovers Vol. 3 - 1985 - Vista Sounds More Reggae for Lovers Vol.
4 - 1985 - Vista Sounds Compilations From the Beginning 1969–1976 - Pioneer International Kick de Bucket - Rhino Records Greatest Hits - 1979 - Trojan Records ...Best of – Longshot Kick de Bucket - 1997 - Trojan Records Let Your Yeah Be Yeah: Anthology 1966 to 1986 - 2001 - Trojan Records Give and Take: The Best of The Pioneers - 2002 - Trojan Records Singles "Golden Opportunity" (1965), Wincox "River Bed" (1965), Wincox "Sometime" (1965), Island (B-side to Theo Beckford's "Trench Town People") "Good Nannie" (1966), Rio "Too Late" (1966), Rio "Teardrops to a Smile" (1967), Caltone (B-side to The Emotions' "A Rainbow") "Goodies Are the Greatest" (1967), Amalgamated "Long Shot (Bus Me Bet)" (1967), Amalgamated "Never Come Running Back" (1967), Rio (with The Ramblers) "Give Me Little Loving" (1968), Amalgamated "Catch the Beat" (1968), Amalgamated (with Gibson's All Star) "Don't You Know" (1968), Amalgamated "Easy Come Easy Go" (1968), Pyramid (with Beverley's All Stars) "Long Shot" (1968), Amalgamated "No Dope Me Pony" (1968), Amalgamated "Sweet Dreams" (1968), Amalgamated "Whip Them" (1968), Blue Cat "Jackpot" (1968), Amalgamated "I Love No Other Girl" (1968), Caltone (with The Ramblers) "Give It to Me" (1968), Blue Cat "Regga Beat" (1968), Blue Cat "Tickle Me for Days" (1968), Amalgamated "Shake It Up" (1968), Blue Cat "Poor Rameses" (1969), Trojan "Samfie Man" (1969), Trojan "Long Shot Kick the Bucket" (1969), Trojan (UK No. 21) "Who the Cap Fits" (1969), Amalgamated "Mama Look Deh" (1969), Amalgamated "Black Bud" (1969), Trojan "Alli Button" (1969), Amalgamated "Boss Festival" (1969) "Love Love Everyday" (1969), Amalgamated (B-side to Moon Boys' "Apollo 11") "Pee Pee Cluck Cluck" (1969), Pyramid "In Orbit" (1969), Beverly's "Money Day" (1970), Trojan "I Need Your Sweet Inspiration" (1970), Trojan "Simmer Down Quashie" (1970), Trojan "Driven Back" (1970), Trojan "Battle of the Giants" (1970), Trojan "Starvation" (1970), Summit "Let Your Yeah Be Yeah" (1971), Trojan (UK No. 5) "Give and Take" (1971), Trojan (UK No. 35) "Get Ready" (1971), Summit "Land of Complexion" (1971), Summit "I Am a Believer" (1971), Hot Shot "Roll Muddy River" (1972), Trojan "I Believe in Love" (1972), Trojan "You Don't Know Like I Know" (1972), Trojan "The World Needs Love" (1972), Trojan "Mother and Child Reunion" (1972), Trojan (with Greyhound) "Story Book Children" (1972), Summit "A Little Bit of Soap" (1973), Trojan "At the Discotheque" (1973), Trojan "Bad to Be Good" (1973), Trojan "Papa Was a Rolling Stone" (1973), Joe Gibbs "Some Livin' Some Dyin'" (197?), Trojan "Honey Bee" (1974), Trojan "Jamaica Jerk Off" (1974), Trojan "Sweet Number One" (1974), Trojan "I'm Gonna Knock on Your Door" (1974), Trojan "Do What You Wanna Do" (1975), Fontana "Feel the Rhythm (of You and I)" (1976), Mercury "Broken Man" (1976), Mercury "My Good Friend James" (1977), Mercury "My Special Prayer" (1977), Trojan "Mother Ritty" (19??), Beverley's "Riot in a Notting Hill" (1978), Trojan "Your Love Is Something Else" (1979), ICE "Rock My Soul" (1985), Creole "Reggae in London City" (1986), Trojan "Do It Right" (1986), Trojan "Bad Company" (198?), Pioneer International "Starvation" (198?), Boss "Bring Back the Yester Years" (1997), Joe Gibbs "Run Run Run" (19??), MGA "Mettle" (19??), Trojan "Nosey Parker" (19??), Pioneer "Pan Yu Machete" (19??) 
The Pioneers also had a number 42 UK hit in 1980 with a double-A-side release of "Long Shot Kick de Bucket" and Harry J All-Stars' "Liquidator", and a four-track EP consisting of tracks by The Pioneers, The Maytals, The Skatalites, and Jimmy Cliff reached number 86 in 1989. Cover versions The Pioneers track "Jackpot" was covered by The Beat on their 1980 album I Just Can't Stop It. Their song "Starvation" was also covered on the "Starvation/Tam Tam Pour L'Ethiopie" charity single released in 1985. The Selecter covered "Time Hard" as "Everyday" on their 1980 album Too Much Pressure. See also List of reggae musicians List of ska musicians Trojan skinhead Island Records discography References External links Official website Jamaican reggae musical groups Jamaican ska groups Musical groups established in 1962 Musical trios Island Records artists Trojan Records artists 1962 establishments in Jamaica
41235121
https://en.wikipedia.org/wiki/Jacob%20Reider
Jacob Reider
Jacob Reider is an American physician and expert in health information technology policy. Education Reider holds a Bachelor of Arts in cognitive science from Hampshire College, and a Doctor of Medicine from Albany Medical College. Career Reider served as CMIO of Allscripts, a developer of electronic health records, and as Chief Strategy Officer and later CEO of Kyron, a healthcare analytics startup based in Palo Alto, California. He was Medical Director of Clinical Systems at CapitalCare Medical Group and Associate Dean of Biomedical Informatics at Albany Medical College. Reider has authored several papers on primary care topics and health information technology. Reider was Deputy National Coordinator and Chief Medical Officer of the Office of the National Coordinator for Health Information Technology, a staff division of the United States Department of Health and Human Services in the executive branch of the United States Government. Reider was named acting National Coordinator when Farzad Mostashari resigned in October 2013. Reider was recognized for leading the organization's work on clinical decision support, usability and health IT safety. During this time, Reider helped to create the regulatory framework for the United States' $30B of investments in health information technology. He was chair of the Health IT Standards Committee and led ONC's policy work on health IT certification and its relationship to the CMS EHR incentive programs. He was replaced by Karen DeSalvo on January 13, 2014. As of 2020, Reider serves as CEO of The Alliance for Better Health, a New York State DSRIP PPS, and remains co-founder (with Bryan Sivak) of RS Partners, a consulting firm specializing in health innovation, Chief Health Officer of Physera, a health innovation start-up company, and a member of the Board of Directors of Avhana Health, a Clinical Decision Support company. Personal life Reider's father and grandfather were both psychiatrists. Reider has been married to Albany Law School Dean Alicia Ouellette for 29 years, and the couple has two children. Reider lives in Albany, NY and speaks several languages, including Spanish. References Date of birth missing (living people) Year of birth missing (living people) Living people Albany Medical College alumni American medical academics American physicians Hampshire College alumni Office of the National Coordinator for Health Information Technology
359440
https://en.wikipedia.org/wiki/Chris%20Crawford%20%28game%20designer%29
Chris Crawford (game designer)
Christopher Crawford (born June 1, 1950) is an American video game designer and writer. Hired by Alan Kay to work at Atari, Inc., he wrote the computer wargame Eastern Front (1941) for the Atari 8-bit family, which was sold through the Atari Program Exchange and then later Atari's official product line. After leaving Atari, he wrote a string of games beginning with Balance of Power for Macintosh. Writing about the process of developing games, he became known among other creators in the nascent home computer game industry for his passionate advocacy of game design as an art form. He self-published The Journal of Computer Game Design and co-founded the Computer Game Developers Conference (later renamed to the Game Developers Conference). In 1992 Crawford withdrew from commercial game development and began experimenting with ideas for a next generation interactive storytelling system. In 2018, Crawford announced that he had halted his work on interactive storytelling, concluding that it will take centuries for civilization to embrace the required concepts. Biography Crawford was born in 1950 in Houston, Texas. After receiving a Bachelor's in physics from UC Davis in 1972 and a Master's in physics from the University of Missouri in 1975, Crawford taught at a community college and the University of California. Crawford first encountered computer games in Missouri, when he met someone attempting to computerize Avalon Hill's Blitzkrieg. While teaching, he wrote an early version of Tanktics in Fortran for the IBM 1130 in 1976 as a hobby, then wrote Tanktics and an early version of Legionnaire for personal computers such as the KIM-1 and Commodore PET. In 1978 Crawford began selling the games and by 1979 "made the startling discovery," he later said, "that it is far more lucrative and enjoyable to teach for fun and program for money." He joined Atari that year, founding the Games Research Group under Alan Kay in 1982. 1980s At Atari, Crawford started game work with Wizard for the Atari VCS, but this work was abandoned. He then turned his attention to the new "Atari Home Computer System," now referred to as the Atari 8-bit family. His first releases on this platform were Energy Czar and Scram, both of which were written in Atari BASIC and published by Atari. He experimented with the Atari 8-bit computer's hardware-assisted smooth scrolling and used it to produce a scrolling map display. This work led to Eastern Front (1941), which is widely considered one of the first wargames on a microcomputer to compete with traditional paper-and-pencil games in terms of depth. Eastern Front was initially published through the Atari Program Exchange, which was intended for user-written software. It was later moved to Atari's official product line. He followed this with Legionnaire, based on the same display engine but adding real-time instead of turn-based game play. Using the knowledge gathered while writing these games, he helped produce technical documentation covering the custom hardware of the Atari 8-bit family, from the hardware-assisted smooth scrolling to digitized sounds, with the information presented in a friendly format for a wide audience. This included videos distributed by ACE (Atari Computer Enthusiast) Support to user groups, and a series of articles published in BYTE magazine containing most of the content of the book De Re Atari, which would be published later by the Atari Program Exchange.
By 1983 BYTE called Crawford "easily the most innovative and talented person working on the Atari 400/800 computer today", and his name was well enough known that Avalon Hill's advertising for a revised version of Legionnaire mentioned Crawford as author. Laid off in 1984 in the collapse of Atari during the video game crash of 1983, Crawford went freelance and produced Balance of Power for the Macintosh in 1985, which was a best-seller, reaching 250,000 units sold. Crawford wrote a non-fiction book published by McGraw Hill in 1984: The Art of Computer Game Design. Game Developers Conference The Game Developers Conference, which in 2013 drew over 23,000 attendees, was conceived of in 1987. The first gathering was held in 1988 as a salon in Crawford's living room with roughly 27 game design friends and associates. The gathering's original name, the Computer Game Developers Conference, would remain into the 1990s until the word Computer was dropped. While the GDC has become a prominent event in the gaming industry, Crawford was eventually ousted from the GDC board, and made his final official appearance at the gathering in 1994. He eventually returned to the conference, giving lectures in both 2001 and 2006. Withdrawal from game industry Crawford acknowledged that his views on computer game design were unusual and controversial. In a 1986 interview with Computer Gaming World he stated that he began writing software as a hobby that became a job with the goal of writing the best possible game. Crawford said that by 1982, his goal was to pursue computer games as an art form. While denouncing hack and slash games ("just straight run, kill or be killed"), text adventures ("about as interesting as a refrigerator light"), and the Commodore 64 and Apple II ("so gutless. I don't feel I can do an interesting game on them"), he stated that Danielle Bunten Berry, Jon Freeman and Anne Westfall, and he himself were the only designers who had proven that they could develop more than one great game. Crawford admitted that some critics called his games inaccessible: At the 1992 CGDC, Chris Crawford gave "The Dragon Speech", which he considers "the finest speech of [his] life". Throughout the speech, he used a dragon as a metaphor for video games as a medium of artistic expression. He declared that he and the video game industry were working "at cross purposes", with the industry focusing heavily on "depth", when Crawford wanted more "breadth": to explore new horizons rather than merely furthering what has already been explored. He arrived at the conclusion that he must leave the gaming industry in order to pursue this dream. He declared that he knew that this idea was insane, but he compared this "insanity" to that of Don Quixote: At the end of the speech, Crawford confronts the dragon: Crawford then charged down the lecture hall and out the door. Storyworlds After his "Dragon" speech and his apparent exit from the gaming industry, Crawford did appear at the conference the following year (1993), but he had not abandoned his unconventional views on game design. Computer Gaming World wrote after the 1993 conference that Crawford "has opted to focus upon a narrow niche of interactive art lovers rather than continuing to reach as many gamers as possible". He served as editor of Interactive Entertainment Design, a monthly collection of essays written for game designers. Since then, Crawford has been working on Storytron (originally known as Erasmatron), an engine for running interactive electronic storyworlds.
A beta version of the Storytronics authoring tool, Swat, was released, and the system was officially launched on March 23, 2009, with Crawford's storyworld sequel to Balance of Power. As of December 1, 2012, the project has been in a "medically induced coma." In August 2013 Crawford released the source code of several of his games to the public, fulfilling a promise made in 2011; among them were Eastern Front (1941) and Balance of Power. People games "People games", as termed by Crawford, are games where the goals are of a social nature and focus on interactions with well-defined characters. They are described in Chris Crawford on Game Design as well as in his "Dragon Speech". Bibliography De Re Atari (contributor) (1982) The Art of Computer Game Design (1984) Balance of Power (Microsoft Press, 1986) - a book about the making of the game The Art of Interactive Design (No Starch Press, 2002) Chris Crawford on Game Design (New Riders Press, 2003) Chris Crawford on Interactive Storytelling (New Riders Press, 2004) Games Tanktics (1978) Energy Czar (1980) Wizard (1980 but only released 25 years later, with the Atari Flashback 2) Scram (1981) Eastern Front (1941) (1981) Legionnaire (1982) Gossip (1983) Excalibur (1983) Balance of Power (1985) Patton Versus Rommel (1986) Trust & Betrayal: The Legacy of Siboot (1987) Balance of Power: The 1990 Edition (1989) The Global Dilemma: Guns or Butter (1990) Balance of the Planet (1990) Patton Strikes Back (1991) Balance of Power: 21st Century (2009) References External links Erasmatazz - Chris Crawford's personal website Chris Crawford profile at MobyGames A Conversation with Chris Crawford in The Escapist webmagazine Video Games are Dead: A Chat With Storytronics Guru Chris Crawford at Gamasutra 1950 births American video game designers Living people Writers from Houston Atari people University of California, Davis alumni University of Missouri alumni
67508632
https://en.wikipedia.org/wiki/Himabindu%20Lakkaraju
Himabindu Lakkaraju
Himabindu "Hima" Lakkaraju is an Indian-American computer scientist who works on machine learning, artificial intelligence, algorithmic bias, and AI accountability. She is currently an Assistant Professor at the Harvard Business School and is also affiliated with the Department of Computer Science at Harvard University. Lakkaraju is known for her work on explainable machine learning. More broadly, her research focuses on developing machine learning models and algorithms that are interpretable, transparent, fair, and reliable. She also investigates the practical and ethical implications of deploying machine learning models in domains involving high-stakes decisions such as healthcare, criminal justice, business, and education. Lakkaraju was named as one of the world's top Innovators Under 35 by both Vanity Fair and the MIT Technology Review. She is also known for her efforts to make the field of machine learning more accessible to the general public. Lakkaraju co-founded the Trustworthy ML Initiative (TrustML) to lower entry barriers and promote research on interpretability, fairness, privacy, and robustness of machine learning models. She has also developed several tutorials and a full-fledged course on the topic of explainable machine learning. Early life and education Lakkaraju obtained a masters degree in computer science from the Indian Institute of Science in Bangalore. As part of her masters thesis, she worked on probabilistic graphical models and developed semi-supervised topic models which can be used to automatically extract sentiment and concepts from customer reviews. This work was published at the SIAM International Conference on Data Mining, and won the Best Research Paper Award at the conference. She then spent two years as a research engineer at IBM Research, India in Bangalore before moving to Stanford University to pursue her PhD in computer science. Her doctoral thesis was advised by Jure Leskovec. She also collaborated with Jon Kleinberg, Cynthia Rudin, and Sendhil Mullainathan during her PhD. Her doctoral research focused on developing interpretable and fair machine learning models that can complement human decision making in domains such as healthcare, criminal justice, and education. This work was awarded the Microsoft Research Dissertation Grant and the INFORMS Best Data Mining Paper prize. During her PhD, Lakkaraju spent a summer working as a research fellow at the Data Science for Social Good program at University of Chicago. As part of this program, she collaborated with Rayid Ghani to develop machine learning models which can identify at-risk students and also prescribe appropriate interventions. This research was leveraged by schools in Montgomery County, Maryland. Lakkaraju also worked as a research intern and visiting researcher at Microsoft Research, Redmond during her PhD. She collaborated with Eric Horvitz at Microsoft Research to develop human-in-the-loop algorithms for identifying blind spots of machine learning models. Research and career Lakkaraju's doctoral research focused on developing and evaluating interpretable, transparent, and fair predictive models which can assist human decision makers (e.g., doctors, judges) in domains such as healthcare, criminal justice, and education. As part of her doctoral thesis, she developed algorithms for automatically constructing interpretable rules for classification and other complex decisions which involve trade-offs. 
Lakkaraju and her co-authors also highlighted the challenges associated with evaluating predictive models in settings with missing counterfactuals and unmeasured confounders, and developed new computational frameworks for addressing these challenges. She co-authored a study which demonstrated that when machine learning models are used to assist in making bail decisions, they can help reduce crime rates by up to 24.8% without exacerbating racial disparities. Lakkaraju joined Harvard University as a postdoctoral researcher in 2018, and then became an assistant professor at the Harvard Business School and the Department of Computer Science at Harvard University in 2020. Over the past few years, she has done pioneering work in the area of explainable machine learning. She initiated the study of adaptive and interactive post hoc explanations, which can be used to explain the behavior of complex machine learning models in a manner that is tailored to user preferences. She and her collaborators also made one of the first attempts at identifying and formalizing the vulnerabilities of popular post hoc explanation methods. They demonstrated how adversaries can game popular explanation methods and elicit explanations that hide undesirable biases (e.g., racial or gender biases) of the underlying models. Lakkaraju also co-authored a study which demonstrated that domain experts may not always interpret post hoc explanations correctly, and that adversaries could exploit post hoc explanations to manipulate experts into trusting and deploying biased models. She also worked on improving the reliability of explanation methods: she and her collaborators proposed a unified theoretical framework for analyzing and improving the robustness of different classes of post hoc explanation methods, establishing the first known connections between explainability and adversarial training. Lakkaraju has also made important research contributions to the field of algorithmic recourse. She and her co-authors developed one of the first methods which allows decision makers to vet predictive models thoroughly to ensure that the recourse provided is meaningful and non-discriminatory. Her research has also highlighted critical flaws in several popular approaches in the literature of algorithmic recourse. Trustworthy ML Initiative (TrustML) In 2020, Lakkaraju co-founded the Trustworthy ML Initiative (TrustML) to democratize and promote research in the field of trustworthy machine learning, which broadly encompasses interpretability, fairness, privacy, and robustness of machine learning models. This initiative aims to enable easy access to fundamental resources for newcomers in the field, provide a platform for early career researchers to showcase their work, and more broadly develop a community of researchers and practitioners working on topics related to trustworthy ML. Lakkaraju has developed several tutorials and a full-fledged course on explainable machine learning as part of this initiative.
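Post hoc explanation methods of the kind discussed in this section typically probe a trained "black-box" model by perturbing an input and observing how the prediction moves. The sketch below shows only that generic perturbation idea, not any specific published method; the model and feature names are invented:

# Minimal perturbation-based feature attribution for a black-box model.
# Generic illustration only; the "model" is an invented formula.

def black_box(x):
    return 0.8 * x["income"] - 0.5 * x["debt"] + 0.1 * x["age"]

def explain(model, x, eps=1.0):
    """Score each feature by how much nudging it by eps shifts the output."""
    base = model(x)
    scores = {}
    for name in x:
        nudged = dict(x)
        nudged[name] += eps
        scores[name] = model(nudged) - base
    # Sort by absolute influence, largest first
    return dict(sorted(scores.items(), key=lambda kv: -abs(kv[1])))

print(explain(black_box, {"income": 3.0, "debt": 2.0, "age": 40.0}))
# approximately {'income': 0.8, 'debt': -0.5, 'age': 0.1}

The adversarial results described above exploit exactly this query-based setup: a model can be crafted to behave innocuously on the synthetic perturbed inputs an explainer generates while remaining biased on real ones.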
Awards and honors 2021 National Science Foundation – Amazon Fairness in AI grant 2020 Amazon Research Award 2019 MIT Technology Review Innovators Under 35 2019 Vanity Fair Future Innovators 2017 Microsoft Research Dissertation Grant 2017 INFORMS Best Data Mining Paper Prize 2016 Carnegie Mellon University Rising Stars in Electrical Engineering and Computer Science 2015 Google Anita Borg Scholarship 2013 Stanford Graduate Fellowship 2011 Best Research Paper Award, SIAM International Conference on Data Mining External links A course on "Interpretability and Explainability in Machine Learning", 2019 NeurIPS conference tutorial on "Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities", 2020 AAAI conference tutorial on "Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities", 2021 CHIL conference tutorial on "Explainable ML: Understanding the Limits and Pushing the Boundaries", 2021 Selected publications References Living people Year of birth missing (living people) Indian Institute of Science alumni Stanford University alumni University of Chicago faculty Harvard Business School faculty American computer scientists American women scientists American women academics 21st-century American women
217315
https://en.wikipedia.org/wiki/Cray
Cray
Cray Inc., a subsidiary of Hewlett Packard Enterprise, is an American supercomputer manufacturer headquartered in Seattle, Washington. It also manufactures systems for data storage and analytics. Several Cray supercomputer systems are listed in the TOP500, which ranks the most powerful supercomputers in the world. Cray manufactures its products in part in Chippewa Falls, Wisconsin, where its founder, Seymour Cray, was born and raised. The company also has offices in Bloomington, Minnesota (which have been converted to Hewlett Packard Enterprise offices), and numerous other sales, service, engineering, and R&D locations around the world. The company's predecessor, Cray Research, Inc. (CRI), was founded in 1972 by computer designer Seymour Cray. Seymour Cray later formed Cray Computer Corporation (CCC) in 1989, which went bankrupt in 1995. Cray Research was acquired by Silicon Graphics (SGI) in 1996. Cray Inc. was formed in 2000 when Tera Computer Company purchased the Cray Research Inc. business from SGI and adopted the name of its acquisition. The company was acquired by Hewlett Packard Enterprise in 2019 for $1.3 billion. History Background: 1950 to 1972 Seymour Cray began working in the computing field in 1950 when he joined Engineering Research Associates (ERA) in Saint Paul, Minnesota. There, he helped to create the ERA 1103. ERA eventually became part of UNIVAC and began to be phased out. Cray left the company in 1960, a few years after former ERA employees set up Control Data Corporation (CDC). He initially worked out of the CDC headquarters in Minneapolis, but grew upset by constant interruptions by managers. He eventually set up a lab in his home town of Chippewa Falls, Wisconsin, about 85 miles to the east. Cray had a string of successes at CDC, including the CDC 6600 and CDC 7600. Cray Research Inc. and Cray Computer Corporation: 1972 to 1996 When CDC ran into financial difficulties in the late 1960s, development funds for Cray's follow-on CDC 8600 became scarce. When he was told the project would have to be put "on hold" in 1972, Cray left to form his own company, Cray Research, Inc. Copying the previous arrangement, Cray kept the research and development facilities in Chippewa Falls and put the business headquarters in Minneapolis. The company's first product, the Cray-1 supercomputer, was a major success because it was significantly faster than all other computers at the time. The first system was sold within a month for US$8.8 million. Seymour Cray continued working, this time on the Cray-2, though it only ended up being marginally faster than the Cray X-MP, developed by another team at the company. Cray soon left the CEO position to become an independent contractor. He started a new Very Large Scale Integration technology lab for the Cray-2 in Boulder, Colorado, Cray Laboratories, in 1979, which closed in 1982; undaunted, Cray later headed a similar spin-off in 1989, Cray Computer Corporation (CCC) in Colorado Springs, Colorado, where he worked on the Cray-3 project, the first attempt at major use of gallium arsenide (GaAs) semiconductors in computing. However, the changing political climate (collapse of the Warsaw Pact and the end of the Cold War) resulted in poor sales prospects. Ultimately, only one Cray-3 was delivered, and a number of follow-on designs were never completed. The company filed for bankruptcy in 1995. CCC's remains then became Seymour Cray's final corporation, SRC Computers, Inc.
Cray Research continued development along a separate line of computers, originally with lead designer Steve Chen and the Cray X-MP. After Chen's departure, the Cray Y-MP, Cray C90 and Cray T90 were developed on the original Cray-1 architecture but achieved much greater performance via multiple additional processors, faster clocks, and wider vector pipes. The uncertainty of the Cray-2 project gave rise to a number of Cray-object-code-compatible "Crayette" firms: Scientific Computer Systems (SCS), American Supercomputer, Supertek, and perhaps one other firm. These firms did not intend to compete against Cray head-on; instead they attempted less expensive, slower CMOS versions of the X-MP, running the COS operating system (in SCS's case) and the CFT Fortran compiler. They also considered the Cray Time Sharing System operating system, developed at the United States Department of Energy national laboratories (LANL/LLNL), before joining the broader trend toward adoption of Unix. A series of massively parallel computers from Thinking Machines Corporation, Kendall Square Research, Intel, nCUBE, MasPar and Meiko Scientific took over the high-performance market during the 1980s. At first, Cray Research denigrated such approaches by complaining that developing software to effectively use the machines was difficult – a true complaint in the era of the ILLIAC IV, but one becoming less so each day. Cray eventually realized that the approach was likely the only way forward and started a five-year project to capture the lead in this area: the plan's result was the Cray T3D and Cray T3E series, based on Digital Equipment Corporation's DEC Alpha processors, which by 2000 left Cray as the only remaining supercomputer vendor in the market besides NEC's SX architecture. Most sites with a Cray installation considered themselves members of an "exclusive club" of Cray operators. Cray computers were considered quite prestigious because Crays were extremely expensive machines, and the number of units sold was small compared to ordinary mainframes. This perception extended to countries as well: to boost the perception of exclusivity, Cray Research's marketing department had promotional neckties made with a mosaic of tiny national flags illustrating the "club of Cray-operating countries". New vendors introduced small supercomputers, known as minisupercomputers (as opposed to superminis), during the late 1980s and early 1990s, which out-competed low-end Cray machines in the market. The Convex Computer series, as well as a number of small-scale parallel machines from companies like Pyramid Technology and Alliant Computer Systems, were particularly popular. One such vendor was Supertek, whose S-1 machine was an air-cooled CMOS implementation of the X-MP processor. Cray purchased Supertek in 1990 and sold the S-1 as the Cray XMS, but the machine proved problematic; meanwhile, the not-yet-completed S-2, a Y-MP clone, was later offered as the Cray Y-MP EL (later becoming the Cray EL90), which started to sell in reasonable numbers in 1991–92, mostly to smaller companies, notably in the oil exploration business. This line evolved into the Cray J90 and eventually the Cray SV1 in 1998. In December 1991, Cray purchased some of the assets of Floating Point Systems, another minisuper vendor that had moved into the file server market with its SPARC-based Model 500 line. These symmetric multiprocessing machines scaled up to 64 processors and ran a modified version of the Solaris operating system from Sun Microsystems. Cray set up Cray Research Superservers, Inc.
(later the Cray Business Systems Division) to sell this system as the Cray S-MP, later replacing it with the Cray CS6400. In spite of these machines being some of the most powerful available when applied to appropriate workloads, Cray was never very successful in this market, possibly because it was so foreign to the company's existing market niche. CCC was building the Cray-3/SSS when it went into Chapter 11 in March 1995. Silicon Graphics ownership: 1996 to 2000 Cray Research was acquired by Silicon Graphics (SGI) for $740 million in February 1996. In May 1996, SGI sold the Superservers business to Sun. Sun then turned the UltraSPARC-based Starfire project, then under development, into the extremely successful Sun Enterprise 10000 range of servers. SGI used several Cray technologies in its attempt to move from the graphics workstation market into supercomputing. Key among these was the use of the Cray-developed HIPPI computer bus and details of the interconnects used in the T3 series. SGI's long-term strategy was to merge its high-end server line with Cray's product lines in two phases, code-named SN1 and SN2 (SN standing for "Scalable Node"). The SN1 was intended to replace the T3E and SGI Origin 2000 systems and later became the SN-MIPS or SGI Origin 3000 architecture. The SN2 was originally intended to unify all high-end/supercomputer product lines, including the T90, into a single architecture. This goal was never achieved before SGI divested itself of the Cray business, and the SN2 name was later associated with the SN-IA or SGI Altix 3000 architecture. In October 1996, founder Seymour Cray died as a result of a traffic accident. Under SGI ownership, one new Cray model line, the Cray SV1, was launched in 1998. This was a clustered SMP vector processor architecture, developed from J90 technology. On March 2, 2000, Cray was sold to Tera Computer Company, which was renamed Cray Inc. Post-Tera merger: 2000 to 2019 After the Tera merger, the Tera MTA system was relaunched as the Cray MTA-2. This was not a commercial success and shipped to only two customers. Cray Inc. also acquired exclusive rights to sell the NEC SX-6 supercomputer in the U.S., Canada and Mexico, badging it, unsuccessfully, as the Cray SX-6. In 2002, Cray Inc. announced its first new model, the Cray X1, a combined-architecture vector processor / massively parallel supercomputer. Previously known as the SV2, the X1 was the end result of the earlier SN2 concept, which originated during the SGI years. In May 2004, Cray was announced as one of the partners in the United States Department of Energy's fastest-computer-in-the-world project to build a 50 teraflops machine for the Oak Ridge National Laboratory. Cray was sued in 2002 by Isothermal Systems Research for patent infringement. The suit claimed that Cray used ISR's patented technology in the development of the Cray X1. The lawsuit was settled in 2003. As of November 2004, the Cray X1 had a maximum measured performance of 5.9 teraflops, making it the 29th fastest supercomputer in the world. Since then the X1 has been superseded by the X1E, with faster dual-core processors. On October 4, 2004, the company announced the Cray XD1 range of entry-level supercomputers, which use dual-core 64-bit Advanced Micro Devices Opteron central processing units running Linux. This system was previously known as the OctigaBay 12K before Cray's acquisition of that company. The XD1 provided one Xilinx Virtex II Pro field-programmable gate array (FPGA) with each node of four Opteron processors.
The FPGAs could be configured to embody various digital hardware designs and could augment the processing or input/output capabilities of the Opteron processors. Furthermore, each FPGA contains a pair of PowerPC 405 processors which can add to the already considerable power of a single node. The Cray XD1, although moderately successful, was eventually discontinued. In 2004, Cray completed the Red Storm system for Sandia National Laboratories. Red Storm was to become the jumping-off point for a string of successful products that eventually revitalized Cray in supercomputing. Red Storm had processors clustered in 96-unit cabinets, a theoretical maximum of 300 cabinets in a machine, and a design speed of 41.5 teraflops. Red Storm also included an innovative new design for network interconnects, which was dubbed SeaStar and destined to be the centerpiece of succeeding innovations by Cray. The Cray XT3 massively parallel supercomputer became a commercialized version of Red Storm, similar in many respects to the earlier T3E architecture but, like the XD1, using AMD Opteron processors. The Cray XT4, introduced in 2006, added support for DDR2 memory and newer dual-core and future quad-core Opteron processors, and utilized a second-generation SeaStar2 communication coprocessor. It also included an option for FPGA chips to be plugged directly into processor sockets, unlike the Cray XD1, which required a dedicated socket for the FPGA coprocessor. On August 8, 2005, Peter Ungaro was appointed CEO. Ungaro had joined Cray in August 2003 as Vice President of Sales and Marketing and had been made Cray's President in March 2005. On November 13, 2006, Cray announced a new system, the Cray XMT, based on the MTA series of machines. This system combined multi-threaded processors, as used on the original Tera systems, and the SeaStar2 interconnect used by the XT4. By reusing ASICs, boards, cabinets, and system software from the comparatively higher-volume XT4 product, the cost of making the very specialized MTA system could be reduced. A second generation of the XMT was scheduled for release in 2011, with the first system ordered by the Swiss National Supercomputing Center (CSCS). In 2006, Cray announced a vision of products dubbed Adaptive Supercomputing. The first generation of such systems, dubbed the Rainier Project, used a common interconnect network (SeaStar2), programming environment, cabinet design, and I/O subsystem. These systems included the existing XT4 and the XMT. The second generation, launched as the XT5h, allowed a system to combine compute elements of various types into a common system, sharing infrastructure. The XT5h combined Opteron, vector, multithreaded, and FPGA compute processors in a single system. In April 2008, Cray and Intel announced they would collaborate on future supercomputer systems. This partnership produced the Cray CX1 system, launched in September the same year. This was a deskside blade server system, comprising up to 16 dual- or quad-core Intel Xeon processors, with either Microsoft Windows HPC Server 2008 or Red Hat Enterprise Linux installed. By 2009, the largest computer system Cray had delivered was the Cray XT5 system at the National Center for Computational Sciences at Oak Ridge National Laboratory. This system, with over 224,000 processing cores, was dubbed Jaguar and was the fastest computer in the world as measured by the LINPACK benchmark, at a speed of 1.75 petaflops, until being surpassed by the Tianhe-1A in October 2010.
It was the first system to exceed a sustained performance of 1 petaflops on a 64-bit scientific application. In May 2010, the Cray XE6 supercomputer was announced. The Cray XE6 system had at its core the new Gemini system interconnect. This new interconnect included a true global address space and represented a return to the T3E feature set that had been so successful with Cray Research. This product was a successful follow-on to the XT3, XT4 and XT5 products. The first multi-cabinet XE6 system was shipped in July 2010. The next-generation Cascade systems were designed to make use of future multicore and/or manycore processors from vendors such as Intel and NVIDIA. Cascade was scheduled to be introduced in early 2013 and designed to use the next-generation network chip and follow-on to Gemini, code-named Aries. In early 2010, Cray also introduced the Cray CX1000, a rack-mounted system with a choice of compute-based, GPU-based, or SMP-based chassis. The CX1 and CX1000 product lines were sold until late 2011. In 2011, Cray announced the Cray XK6 hybrid supercomputer. The Cray XK6 system, capable of scaling to 500,000 processors and 50 petaflops of peak performance, combined Cray's Gemini interconnect, AMD's multi-core scalar processors, and NVIDIA's Tesla GPGPU processors. In October 2012 Cray announced the Cray XK7, which supports the NVIDIA Kepler GPGPU, and announced that the ORNL Jaguar system would be upgraded to an XK7 (renamed Titan) capable of over 20 petaflops. Titan was the world's fastest supercomputer as measured by the LINPACK benchmark until the introduction of the substantially faster Tianhe-2 in 2013. In 2011 Cray also announced it had been awarded the $188 million Blue Waters contract with the University of Illinois at Urbana–Champaign, after IBM had pulled out of the delivery. This system was delivered in 2012 and was the largest system, in terms of cabinets and general-purpose x86 processors, that Cray had ever delivered. In November 2011, the Cray Sonexion 1300 Data Storage System was introduced, signaling Cray's entry into the high-performance storage business. This product used modular technology and a Lustre file system. In 2011, Cray launched the OpenACC parallel programming standard organization. However, in 2019 Cray announced that it was deprecating OpenACC and would support OpenMP. In April 2012, Cray Inc. announced the sale of its interconnect hardware development program and related intellectual property to Intel Corporation for $140 million. On November 9, 2012, Cray announced the acquisition of Appro International, Inc., a California-based privately held developer of advanced scalable supercomputing solutions. At the time the #3 provider on the Top100 supercomputer list, Appro built some of the world's most advanced high-performance computing (HPC) cluster systems. In 2012, Cray opened a subsidiary in China. Subsidiary of Hewlett Packard Enterprise: 2019 to present On September 25, 2019, Hewlett Packard Enterprise (HPE) acquired the company for $1.3 billion. In October 2020, HPE was awarded the contract to build the pre-exascale EuroHPC computer LUMI in Kajaani, Finland. The contract, worth €144.5 million, is for an HPE Cray EX system with a theoretical maximum performance of 550 petaflops. Once fully operational, LUMI will become one of the fastest supercomputers in the world.
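Headline figures such as LUMI's 550-petaflop theoretical maximum are derived arithmetically rather than measured: nodes × cores per node × clock rate × floating-point operations per core per cycle. A back-of-the-envelope Python illustration (the configuration numbers are invented for the example and are not the specification of LUMI or any Cray system; measured LINPACK results always come in below this ceiling):

# How a theoretical peak performance figure is computed.
# Illustrative configuration only; not any real Cray system.

def peak_flops(nodes, cores_per_node, clock_hz, flops_per_cycle):
    """Theoretical peak = total cores x clock x FLOPs per core per cycle."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# e.g. 1,000 nodes of 64 cores at 2 GHz, 16 double-precision FLOPs/cycle
peak = peak_flops(1_000, 64, 2e9, 16)
print(f"{peak / 1e15:.2f} petaflops")  # 2.05 petaflops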
References Further reading External links Comprehensive site with details and history of machines, sales documents, Cray Channels magazine and FAQ notes Cray Manuals Library @ Computing History Cray Manuals at bitsavers.org Historic Cray Research Marketing Materials at the Computer History Museum 2019 mergers and acquisitions Manufacturing companies based in Seattle Companies formerly listed on the Nasdaq Computer companies established in 1972 American companies established in 1972 1995 initial public offerings Computer companies of the United States Computer hardware companies Silicon Graphics Hewlett-Packard acquisitions Hewlett-Packard Enterprise acquisitions
2909591
https://en.wikipedia.org/wiki/IFolder
IFolder
iFolder is an open-source application, developed by Novell, Inc., intended to allow cross-platform file sharing across computer networks. iFolder operates on the concept of shared folders, where a folder is marked as shared and the contents of the folder are then synchronized to other computers over a network, either directly between computers in a peer-to-peer fashion or through a server. This is intended to allow a single user to synchronize files between different computers (for example between a work computer and a home computer) or share files with other users (for example a group of people who are collaborating on a project). The core of iFolder is a project called Simias. It is Simias that monitors files for changes, synchronizes those changes, and controls the access permissions on folders; a rough sketch of this change-detection-and-sync loop appears at the end of this article. The actual iFolder clients (including a graphical desktop client and a web client) are developed as separate programs that communicate with the Simias back-end. History Originally conceived and developed at PGSoft before the company was taken over by Novell in 2000, iFolder was announced by Novell on March 19, 2001, and released on June 29, 2001 as a software package for Windows NT/2000 and Novell NetWare 5.1, or included with the forthcoming Novell NetWare 6.0. It also included the ability to access shared files through a web browser. iFolder Professional Edition 2, announced on March 13, 2002 and released a month later, added support for Linux and Solaris and web access support for Windows CE and Palm OS. This edition was also designed to share files between millions of users in large companies, with increased reporting features for administrators. In 2003 iFolder won a Codie award. On March 22, 2004, after their purchase of the Linux software companies Ximian and SUSE, Novell announced that they were releasing iFolder as an open source project under the GPL license. They also announced that the open source version of iFolder would use the Mono framework in an effort to ease development. iFolder 3.0 was released on June 22, 2005. On March 31, 2006, Novell announced that iFolder Enterprise Server was now open source. On April 2, 2009, Novell released iFolder 3.7.2, which included a Mac client for 10.4 and 10.5 as well as a Windows Vista client. In addition to the improved client lineup, this version supports SSL, LDAP group support, auto-account creation, iFolder merge, and enhanced web access and administration. The iFolder.com website was completely redesigned with no references to the earlier versions. On November 25, 2009, Novell released iFolder 3.8. See also Comparison of file hosting services Comparison of file synchronization software Comparison of online backup services References External links Installation procedure for OpenSUSE. OpenSUSE is recommended because of its lineage to OES and SLES Installation procedure for Ubuntu and Debian Installation procedure for Ubuntu 11.04 based on official RPM packages Free software programmed in C Sharp Mono project applications Novell NetWare Free file sharing software Data synchronization Backup software File hosting for Linux File hosting for macOS File hosting for Windows
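The change-detection-and-sync loop referenced above can be illustrated with a deliberately naive one-way synchronizer: hash every file under a shared folder and copy across only what is missing or changed. This Python sketch is a generic illustration of the concept, not Simias's actual protocol, API, or code; all paths and names are invented.

# Naive one-way folder synchronizer: a generic sketch of the
# monitor-and-sync idea behind iFolder/Simias, not its actual code.

import hashlib
import shutil
from pathlib import Path

def digest(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sync(src, dst):
    """Copy files from src into dst when missing or changed; return what was copied."""
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or digest(target) != digest(f):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # preserves timestamps
            copied.append(str(f.relative_to(src)))
    return copied

# Example with hypothetical paths: sync(Path("shared"), Path("backup/shared"))

A real synchronizer such as Simias must additionally handle two-way conflicts, deletions, and access permissions, which is where most of the engineering effort goes.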
932784
https://en.wikipedia.org/wiki/Christopher%20Strachey
Christopher Strachey
Christopher S. Strachey (16 November 1916 – 18 May 1975) was a British computer scientist. He was one of the founders of denotational semantics, and a pioneer in programming language design and computer time-sharing. He has also been credited as possibly being the first developer of a video game. He was a member of the Strachey family, prominent in government, arts, administration, and academia. Early life and education Christopher Strachey was born on 16 November 1916 to Oliver Strachey and Rachel (Ray) Costelloe in Hampstead, England. Oliver Strachey was the son of Richard Strachey and the great-grandson of Sir Henry Strachey, 1st Baronet. His elder sister was the writer Barbara Strachey. In 1919, the family moved to 51 Gordon Square. The Stracheys belonged to the Bloomsbury Group, whose members included Virginia Woolf, John Maynard Keynes and Christopher's uncle Lytton Strachey. At 13, Christopher went to Gresham's School, Holt, where he showed signs of brilliance but in general performed poorly. He was admitted to King's College, Cambridge (the same college as Alan Turing) in 1935, where he continued to neglect his studies. Strachey studied mathematics and then transferred to physics. At the end of his third year at Cambridge, Strachey suffered a nervous breakdown, possibly related to coming to terms with his homosexuality. He returned to Cambridge but managed only a "lower second" in the Natural Sciences Tripos. Career Unable to continue his education, Christopher joined Standard Telephones and Cables (STC) as a research physicist. His first job was providing mathematical analysis for the design of electron tubes used in radar. The complexity of the calculations required the use of a differential analyser. This initial experience with a computing machine sparked Strachey's interest and he began to research the topic. An application for a research degree at the University of Cambridge was rejected, and Strachey continued to work at STC throughout the Second World War. After the war he fulfilled a long-standing ambition by becoming a schoolmaster at St Edmund's School, Canterbury, teaching mathematics and physics. Three years later, in 1949, he moved to the more prestigious Harrow School, where he stayed for three years. In January 1951, a friend introduced him to Mike Woodger of the National Physical Laboratory (NPL). The lab had successfully built a reduced version of Alan Turing's Automatic Computing Engine (ACE), the concept of which dated from 1945: the Pilot ACE. In his spare time Strachey developed a program for the game of draughts (also known as "checkers"), finishing a preliminary version in May 1951. The game completely exhausted the Pilot ACE's memory. The draughts program was run for the first time on 30 July 1951 at NPL, but was unsuccessful due to program errors. When Strachey heard about the Manchester Mark 1, which had a much bigger memory, he asked his former fellow student Alan Turing for the manual and transcribed his program into the operation codes of that machine by around October 1951. By the summer of 1952, the program could "play a complete game of Draughts at a reasonable speed". While he did not give this game – which may have been the first video game – a name, Noah Wardrip-Fruin named it "M. U. C. Draughts." Strachey programmed the first-ever music performed by a computer: a rendition of the British National Anthem "God Save the Queen" on the Mark II Manchester Electronic Computer at Manchester, in 1951.
Later that year, short extracts of three pieces were recorded there by a BBC outside broadcasting unit: "God Save the Queen", "Baa, Baa, Black Sheep", and "In the Mood". Researchers at the University of Canterbury, Christchurch restored the acetate master disc in 2016 and the results may be heard on SoundCloud. In May 1952, Strachey gave a two-part talk on "the study of control in animals and machines" ("cybernetics") for the BBC Home Service's Science Survey programme. Strachey worked for the National Research Development Corporation (NRDC) from 1952 to 1959. While working on the St. Lawrence Seaway project, he was able to visit several computer centres in the United States and catalogue their instruction sets. Later, he worked on programming both the Elliott 401 computer and the Ferranti Pegasus computer. Together with Donald B. Gillies, he filed three patents in computing design, including the design of base registers for program relocation. He also worked on the analysis of vibration in aircraft, working briefly with Roger Penrose. In 1959, Strachey left NRDC to become a computer consultant, working for NRDC, EMI, Ferranti and other organisations on a number of wide-ranging projects. This work included logical design for computers, providing autocode, and later the design of high-level programming languages. For a contract to produce the autocode for the Ferranti Orion computer, Strachey hired Peter Landin, who became his sole assistant for the duration of Strachey's consulting period. In 1962, while remaining a consultant, he accepted a position at Cambridge University. In 1965, Strachey accepted a position at Oxford University as the first director of the Programming Research Group; he later became the university's first professor of computer science and a fellow of Wolfson College, Oxford. He collaborated with Dana Scott. Strachey was elected as a distinguished fellow of the British Computer Society in 1971 for his pioneering work in computer science. In 1973, Strachey (along with Robert Milne) began to write an essay submitted to the Adams Prize competition, after which they continued working to revise it into book form. Strachey can be seen and heard in the recorded Lighthill debate on AI (see Lighthill report). Strachey contracted an illness diagnosed as jaundice which, after a period of seeming recovery, returned; he died of infectious hepatitis on 18 May 1975. Work Strachey developed the concept of time-sharing in 1959. He filed a patent application in February that year and gave a paper, "Time Sharing in Large Fast Computers", at the inaugural UNESCO Information Processing Conference in Paris, where he passed the concept on to J. C. R. Licklider. This paper was credited by the MIT Computation Center in 1963 as "the first paper on time-shared computers". He developed the Combined Programming Language (CPL). His influential set of lecture notes, Fundamental Concepts in Programming Languages, formalised the distinction between L- and R-values (as seen in the C programming language). Strachey also coined the term currying, although he did not invent the underlying concept. He was instrumental in the design of the Ferranti Pegasus computer. He was a pioneer of early video games, creating a version of draughts for the Ferranti Mark 1. The macro language m4 derives much from Strachey's GPM (General Purpose Macrogenerator), one of the earliest macro expansion languages.
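Currying, the term Strachey popularised, turns a function of several arguments into a chain of single-argument functions, so that partial application falls out for free. A small illustration of the concept in Python (a conceptual sketch only; Strachey worked in languages such as CPL, not Python):

# Currying: turning f(x, y) into f(x)(y).
# Conceptual illustration only; Strachey did not work in Python.

def add(x, y):
    return x + y

def curry(f):
    """Convert a two-argument function into a chain of one-argument ones."""
    return lambda x: lambda y: f(x, y)

curried_add = curry(add)
add_five = curried_add(5)   # partial application
print(add_five(3))          # 8
print(curried_add(2)(4))    # 6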
Legacy The Department of Computer Science at the University of Oxford has a Christopher Strachey Professorship of Computing, currently held by Samson Abramsky FRS. In November 2016, a Strachey 100 event was held at Oxford University to celebrate the centenary of Strachey's birth, including a viewing at the Weston Library in Oxford of the Christopher Strachey archive held in the Bodleian Library collection. Publications References Further reading Copeland, B.J. A Brief History of Computing, AlanTuring.net, June 2000. Lavington, S. The Pegasus Story, Science Museum, 2000. External links at the Virtual Museum of Computing A simulator of the Manchester Mark 1, executing Christopher Strachey's Love letter algorithm from 1952 A web based version of Christopher Strachey's Love letter algorithm showing word lists Higher-Order and Symbolic Computation Volume 13, Issue 1/2 (April 2000) Special Issue in memory of Christopher Strachey "Pioneer Profiles – Christopher Strachey" in Resurrection. The Bulletin of the Computer Conservation Society. Number 43. Summer 2008. ISSN 0958-7403. Supplementary Strachey Papers held at the British Library 1916 births 1975 deaths People from Hampstead People educated at Gresham's School Alumni of King's College, Cambridge English computer scientists History of computing in the United Kingdom Members of the Department of Computer Science, University of Oxford Fellows of Wolfson College, Oxford Programming language researchers Programming language designers British computer programmers Schoolteachers from London Formal methods people Fellows of the British Computer Society LGBT scientists from the United Kingdom LGBT academics Deaths from hepatitis LGBT mathematicians 20th-century LGBT people Schoolteachers from Kent
164185
https://en.wikipedia.org/wiki/Brutus%20of%20Troy
Brutus of Troy
Brutus, or Brute of Troy, is a legendary descendant of the Trojan hero Aeneas, known in medieval British history as the eponymous founder and first king of Britain. This legend first appears in the Historia Brittonum, an anonymous 9th-century historical compilation to which commentary was added by Nennius, but is best known from the account given by the 12th-century chronicler Geoffrey of Monmouth in his Historia Regum Britanniae. Historia Brittonum Some have suggested that the attribution of the origin of "Britain" to the Latin "Brutus" may be ultimately derived from Isidore of Seville's popular 7th-century work Etymologiae, in which it was speculated that the name of Britain comes from bruti, on the basis that the Britons were, in the eyes of that author, brutes, or savages. A more detailed story, set before the foundation of Rome, follows, in which Brutus is the grandson or great-grandson of Aeneas – a legend that was perhaps inspired by Isidore's spurious etymology and blends it with the Christian, pseudo-historical "Frankish Table of Nations" tradition that emerged in the early medieval European scholarly world (actually of 6th-century AD Byzantine origin, and not Frankish, according to historian Walter Goffart) and attempted to trace the peoples of the known world (as well as legendary figures, such as the Trojan house of Aeneas) back to biblical ancestors. Supposedly following Roman sources such as Livy and Virgil, the Historia tells how Aeneas settled in Italy after the Trojan War, and how his son Ascanius founded Alba Longa, one of the precursors of Rome. Ascanius married, and his wife became pregnant. In a variant version, the father is Silvius, who is identified as either the second son of Aeneas, previously mentioned in the Historia, or as the son of Ascanius. A magician, asked to predict the child's future, said it would be a boy and that he would be the bravest and most beloved in Italy. Enraged, Ascanius had the magician put to death. The mother died in childbirth. The boy, named Brutus, later accidentally killed his father with an arrow and was banished from Italy. After wandering among the islands of the Tyrrhenian Sea and through Gaul, where he founded the city of Tours, Brutus eventually came to Britain, named it after himself, and filled it with his descendants. His reign is synchronised to the time the High Priest Eli was judge in Israel, and when the Ark of the Covenant was taken by the Philistines. A variant version of the Historia Brittonum makes Brutus the son of Ascanius's son Silvius, and traces his genealogy back to Ham, son of Noah. Another chapter traces Brutus's genealogy differently, making him the great-grandson of the legendary Roman king Numa Pompilius, who was himself a son of Ascanius, and tracing his descent from Noah's son Japheth. These Christianising traditions conflict with the classical Trojan genealogies, which relate the Trojan royal family to Greek gods. Yet another Brutus, son of Hisicion, son of Alanus the first European, also traced back across many generations to Japheth, is referred to in the Historia Brittonum. This Brutus's brothers were Francus, Alamanus and Romanus, also ancestors of significant European nations. Historia Regum Britanniae Geoffrey of Monmouth's account tells much the same story, but in greater detail. In this version, Brutus is explicitly the grandson, rather than son, of Ascanius; his father is Ascanius' son Silvius. The magician who predicts great things for the unborn Brutus also foretells he will kill both his parents.
He does so, in the same manner described in the Historia Brittonum, and is banished. Travelling to Greece, he discovers a group of Trojans enslaved there. He becomes their leader, and after a series of battles they defeat the Greek king Pandrasus by attacking his camp at night after capturing the guards. Brutus takes Pandrasus hostage and forces him to let his people go. He is given Pandrasus's daughter Ignoge or Innogen in marriage, along with ships and provisions for the voyage, and sets sail. The Trojans land on a deserted island and discover an abandoned temple to Diana. After performing the appropriate ritual, Brutus falls asleep in front of the goddess's statue and is given a vision of the land where he is destined to settle, an island in the western ocean inhabited only by a few giants. After some adventures in north Africa and a close encounter with the Sirens, Brutus discovers another group of exiled Trojans living on the shores of the Tyrrhenian Sea, led by the prodigious warrior Corineus. In Gaul, Corineus provokes a war with Goffarius Pictus, king of Aquitaine, after hunting in the king's forests without permission. Brutus's nephew Turonus dies in the fighting, and the city of Tours is founded where he is buried. The Trojans win most of their battles but are conscious that the Gauls have the advantage of numbers, so they go back to their ships and sail for Britain, then called Albion. They land on "the sea-coast of Totnes". They meet the giant descendants of Albion and defeat them. Brutus renames the island after himself and becomes its first king. Corineus becomes ruler of Cornwall, which is named after him. They are harassed by the giants during a festival, but kill all of them except their leader, the largest giant, Goemagot, who is saved for a wrestling match against Corineus. Corineus throws him over a cliff to his death. Brutus then founds a city on the banks of the River Thames, which he calls Troia Nova, or New Troy. The name is in time corrupted to Trinovantum, and the city is later called London. He creates laws for his people and rules for twenty-four years. After his death he is buried in Trinovantum, and the island is divided between his three sons: Locrinus (England), Albanactus (Scotland) and Kamber (Wales). Legacy Early translations and adaptations of Geoffrey's Historia, such as Wace's Norman French Roman de Brut and Layamon's Middle English Brut, were named after Brutus, and the word brut came to mean a chronicle of British history. One of several Middle Welsh adaptations was called the Brut y Brenhinedd ("Chronicle of the Kings"). Brut y Tywysogion ("Chronicle of the Princes"), a major chronicle for the Welsh rulers from the 7th century to the loss of independence, is a purely historical work containing no legendary material, but the title reflects the influence of Geoffrey's work and, in one sense, it can be seen as a "sequel" to it. Early chroniclers of Britain, such as Alfred of Beverley, Nicholas Trivet and Giraldus Cambrensis, began their histories of Britain with Brutus. The foundation myth of Brutus having settled in Britain was still considered genuine history during the Early Modern Period; for example, Holinshed's Chronicles (1577) considers the Brutus myth to be factual. The 18th-century English poet Hildebrand Jacob wrote an epic poem about him, Brutus the Trojan, Founder of the British Empire, following in the tradition of the Roman foundation epic, the Aeneid. Brutus is an important character in the book series The Troy Game by Sara Douglass.
Geoffrey's Historia says that Brutus and his followers landed at Totnes in Devon. A stone on Fore Street in Totnes, known as the "Brutus Stone", commemorates this. In 2021, the Totnes community radio station Soundart Radio commissioned a radio drama adaptation of the Brutus myth by the writer Will Kemp. Notes References Translation of Historia Brittonum from J. A. Giles, Six Old English Chronicles, London: Henry G. Bohn 1848. Full text from Fordham University. John Morris (ed), Nennius: Arthurian Period Sources Vol 8, Phillimore, 1980 Geoffrey of Monmouth, The History of the Kings of Britain, translated by Lewis Thorpe, Penguin, 1966 Henry Lewis (ed.), Brut Dingestow (University of Wales Press, 1942). The best-known Middle Welsh adaptation. Original text with introduction and notes, in Welsh. The British History of Geoffrey of Monmouth, translated by Aaron Thompson, revised and corrected by J. A. Giles, 1842 Bulfinch's Mythology External links British folklore British traditional history Characters in works by Geoffrey of Monmouth Founding monarchs Mythological city founders Japheth Welsh folklore Totnes Legendary British kings
33797919
https://en.wikipedia.org/wiki/2011%20Hong%20Kong%20Election%20Committee%20Subsector%20elections
2011 Hong Kong Election Committee Subsector elections
The 2011 Election Committee subsector elections took place between 7:30 am and 10:30 pm on 11 December 2011. The Election Committee sub-sector elections are a part of the contemporary political process of Hong Kong. The election's purpose was to decide the 1,044 elected members of the Election Committee of Hong Kong. The resulting Election Committee was then responsible for electing the Chief Executive of the Hong Kong Special Administrative Region (SAR) in the 2012 election. Background The breakthrough of electoral reform in 2010 changed the membership of the Election Committee for the first time, expanding it from 800 members to 1,200 members. Each sector was allocated 100 more seats proportionally, and 10 Special Members were elected to fill the places of the 10 new ex officio members from the Legislative Council, which had also been expanded from 60 to 70 seats in the reform but was not due to be elected until the following September. The Special Members were 4 in the Chinese People's Political Consultative Conference sub-sector and 2 each in the Heung Yee Kuk, the Hong Kong and Kowloon District Councils, and the New Territories District Councils sub-sectors. Composition The Election Committee consisted of 1,044 [1,034] members elected from 35 subsectors, 60 members nominated by the Religious sub-sector, and 96 [106] ex officio members (Hong Kong deputies to the National People's Congress and members of the Legislative Council of Hong Kong). As the term of office commenced on 1 February 2012, the 1,200-member Election Committee was formed by 38 Election Committee sub-sectors: Heung Yee Kuk (28) [26] Agriculture and Fisheries (60) Insurance (18) Transport (18) Education (30) Legal (30) Accountancy (30) Medical (30) Health Services (30) Engineering (30) Architectural, Surveying and Planning (30) Labour (60) Social Welfare (60) Real Estate and Construction (18) Tourism (18) Commercial (First) (18) Commercial (Second) (18) Industrial (First) (18) Industrial (Second) (18) Finance (18) Financial Services (18) Sports, Performing Arts, Culture and Publication (60) Import and Export (18) Textiles and Garment (18) Wholesale and Retail (18) Information Technology (30) Higher Education (30) Hotel (17) Catering (17) Chinese Medicine (30) Chinese People's Political Consultative Conference (55) [51] Employers' Federation of HK (16) HK and Kowloon District Councils (59) [57] New Territories District Councils (62) HK Chinese Enterprises Association (16) National People's Congress (36) Legislative Council (60) [70] Religious (60) Note: Figures in brackets denote the number of members and figures in square brackets denote the number of members commencing in October 2012. Number of members nominated by the six designated bodies of the religious sub-sector: Catholic Diocese of Hong Kong (10 members) Chinese Muslim Cultural and Fraternal Association (10 members) Hong Kong Christian Council (10 members) The Hong Kong Taoist Association (10 members) The Confucian Academy (10 members) The Hong Kong Buddhist Association (10 members) Nominations The nomination period for the elections was between 8 and 15 November 2011 (the Hong Kong and Kowloon District Councils and the New Territories District Councils sub-sectors had a nomination period between 18 and 24 November 2011). Forums Two candidate forums were arranged for all candidates, and each forum was divided into two sessions.
Candidates numbered 1–21 attended the first session on 23 October 2011, whilst candidates numbered 22–42 attended the second session on the 20 and 23 October 2011. Election results Results by subsectors Statistics are generated from the official election website: Result by affiliations The election results are generated from the official election website. The political affiliations are according to the candidate's self-proclaimed affiliations shown on the election platforms, as well as from the news. |- ! style="background-color:#E9E9E9;text-align:center;" colspan=3 rowspan=2 |Affiliation ! style="background-color:#E9E9E9;text-align:center;" colspan=2 |1st Sector ! style="background-color:#E9E9E9;text-align:center;" colspan=2 |2nd Sector ! style="background-color:#E9E9E9;text-align:center;" colspan=2 |3rd Sector ! style="background-color:#E9E9E9;text-align:center;" colspan=2 |4th Sector ! style="background-color:#E9E9E9;text-align:center;" colspan=2 |Total |- ! style="background-color:#E9E9E9;text-align:right;" |Standing ! style="background-color:#E9E9E9;text-align:right;" |Elected ! style="background-color:#E9E9E9;text-align:right;" |Standing ! style="background-color:#E9E9E9;text-align:right;" |Elected ! style="background-color:#E9E9E9;text-align:right;" |Standing ! style="background-color:#E9E9E9;text-align:right;" |Elected ! style="background-color:#E9E9E9;text-align:right;" |Standing ! style="background-color:#E9E9E9;text-align:right;" |Elected ! style="background-color:#E9E9E9;text-align:right;" |Standing ! style="background-color:#E9E9E9;text-align:right;" |Elected |- |style="background-color:Pink" rowspan="20" | | width=1px style="background-color: " | | style="text-align:left;" |Democratic Alliance for the Betterment and Progress of Hong Kong |12 |10 |2 |2 |9 |5 |61 |61 |84 |78 |- | width=1px style="background-color: " | | style="text-align:left;" |Liberal Party |14 |13 |2 |1 |colspan=2|– |5 |5 |21 |19 |- | width=1px style="background-color: " | | style="text-align:left;" |A16 Alliance |colspan=2|– |16 |15 |colspan=2|– |colspan=2|– |16 |15 |- | width=1px style="background-color: " | | style="text-align:left;" |ICT Energy |colspan=2|– |24 |8 |colspan=2|– |colspan=2|– |24 |8 |- |style="background-color: "| | style="text-align:left;" | Federation of Hong Kong and Kowloon Labour Unions |colspan=2|– |colspan=2|– |10 |8 |colspan=2|– |10 |8 |- |style="background-color: "| | style="text-align:left;" | Hong Kong Federation of Trade Unions |2 |1 |colspan=2|– |2 |2 |3 |3 |7 |6 |- | width=1px style="background-color: " | | style="text-align:left;" |Civil Force |colspan=2|– |colspan=2|– |colspan=2|– |5 |5 |5 |5 |- | width=1px style="background-color: " | | style="text-align:left;" |New Territories Association of Societies |colspan=2|– |colspan=2|– |colspan=2|– |4 |4 |4 |4 |- | width=1px style="background-color: " | | style="text-align:left;" |Education Convergence |colspan=2|– |6 |3 |colspan=2|– |colspan=2|– |6 |3 |- | width=1px style="background-color: " | | style="text-align:left;" |New People's Party |colspan=2|– |1 |1 |colspan=2|– |2 |2 |3 |3 |- | width=1px style="background-color: " | | style="text-align:left;" |Action 9 |colspan=2|– |9 |2 |colspan=2|– |colspan=2|– |9 |2 |- | width=1px style="background-color: " | | style="text-align:left;" |Your Vote Counts |colspan=2|– |6 |2 |colspan=2|– |colspan=2|– |6 |2 |- | width=1px style="background-color: " | | style="text-align:left;" |Y5 Give Me Five |colspan=2|– |5 |2 |colspan=2|– |colspan=2|– |5 |2 |- | width=1px style="background-color: " | 
| style="text-align:left;" |Hong Kong Federation of Education Workers |colspan=2|– |3 |1 |colspan=2|– |colspan=2|– |3 |1 |- | width=1px style="background-color: " | | style="text-align:left;" |Government Disciplined Services General Union |colspan=2|– |colspan=2|– |2 |1 |colspan=2|– |2 |1 |- | width=1px style="background-color: " | | style="text-align:left;" |Hong Kong Women Teachers' Organization |colspan=2|– |2 |1 |colspan=2|– |colspan=2|– |2 |1 |- | width=1px style="background-color: " | | style="text-align:left;" |New Century Forum |1 |1 |colspan=2|– |colspan=2|– |colspan=2|– |1 |1 |- | width=1px style="background-color: " | | style="text-align:left;" |Welfare Empower Hong Kong |colspan=2|– |colspan=2|– |23 |0 |colspan=2|– |23 |0 |- | width=1px style="background-color: " | | style="text-align:left;" |Vox Pop |colspan=2|– |12 |0 |colspan=2|– |colspan=2|– |12 |0 |- | width=1px style="background-color: " | | style="text-align:left;" |Estimated pro-Beijing individuals and others |317 |275 |386 |131 |245 |165 |130 |124 |1,078 |695 |- style="background-color:Pink" | colspan=3 style="text-align:left;" | Total for pro-Beijing camp || 346 || 300 || 484 || 177 || 281 || 173 || 210 || 204 || 1,321 || 854 |- |style="background-color:LightGreen" rowspan="15" | | width=1px style="background-color: " | | style="text-align:left;" |Demo-Social 60 |colspan=2|– |colspan=2|– |31 |29 |colspan=2|– |31 |29 |- | width=1px style="background-color: " | | style="text-align:left;" |Hong Kong Social Workers' General Union |colspan=2|– |colspan=2|– |29 |28 |colspan=2|– |29 |28 |- | width=1px style="background-color: " | | style="text-align:left;" |ProDem22 |colspan=2|– |22 |22 |colspan=2|– |colspan=2|– |22 |22 |- | width=1px style="background-color: " | | style="text-align:left;" |IT Voice 2012 |colspan=2|– |19 |19 |colspan=2|– |colspan=2|– |19 |19 |- | width=1px style="background-color: " | | style="text-align:left;" |Hong Kong Professional Teachers' Union |colspan=2|– |17 |17 |colspan=2|– |colspan=2|– |17 |17 |- | width=1px style="background-color: " | | style="text-align:left;" |Democratic Party |colspan=2|– |18 |14 |colspan=2|– |5 |0 |23 |14 |- | width=1px style="background-color: " | | style="text-align:left;" |Civic Party |colspan=2|– |14 |13 |colspan=2|– |1 |0 |15 |13 |- | width=1px style="background-color: " | | style="text-align:left;" |Academics In Support of Democracy |colspan=2|– |13 |13 |colspan=2|– |colspan=2|– |13 |13 |- | width=1px style="background-color: " | | style="text-align:left;" |Democratic Accountants |colspan=2|– |9 |9 |colspan=2|– |colspan=2|– |9 |9 |- | width=1px style="background-color: " | | style="text-align:left;" |Progressive Social Work |colspan=2|– |colspan=2|– |8 |2 |colspan=2|– |8 |2 |- | width=1px style="background-color: " | | style="text-align:left;" |Hong Kong Association for Democracy and People's Livelihood |colspan=2|– |2 |2 |colspan=2|– |colspan=2|– |2 |2 |- | width=1px style="background-color: " | | style="text-align:left;" |Neo Democrats |colspan=2|– |2 |2 |colspan=2|– |colspan=2|– |2 |2 |- | width=1px style="background-color: " | | style="text-align:left;" |Engineers for Universal Suffrage |colspan=2|– |6 |1 |colspan=2|– |colspan=2|– |6 |1 |- | width=1px style="background-color: " | | style="text-align:left;" |Hong Kong Chinese Medicine Practitioners' Rights General Union |colspan=2|– |11 |0 |colspan=2|– |colspan=2|– |11 |0 |- | width=1px style="background-color: " | | style="text-align:left;" |Pro-democratic individuals and others |3 |0 |2 |2 |colspan=2|– 
|colspan=2|– |5 |2 |- style="background-color:LightGreen" | colspan=3 style="text-align:left;" | Total for pro-democracy camp || 3 || 0 || 135 || 114 || 68 || 59 || 6 || 0 || 212 || 173 |- | width=1px style="background-color: " rowspan=4 | | width=1px style="background-color: " | | style="text-align:left;" |Hong Kong Medical Association |colspan=2|– |30 |15 |colspan=2|– |colspan=2|– |30 |15 |- | width=1px style="background-color: " | | style="text-align:left;" |Public Surgeons' United |colspan=2|– |8 |2 |colspan=2|– |colspan=2|– |8 |2 |- | width=1px style="background-color: " | | style="text-align:left;" |Hong Kong Public Doctors' Association |colspan=2|– |4 |0 |colspan=2|– |colspan=2|– |4 |0 |- | width=1px style="background-color: " | | style="text-align:left;" |Hong Kong and Kowloon Trades Union Council |colspan=2|– |colspan=2|– |3 |0 |colspan=2|– |3 |0 |- |style="text-align:left;background-color:#E9E9E9" colspan="3"|Total (turnout 27.60%) |style="text-align:right;background-color:#E9E9E9"|349 |style="text-align:right;background-color:#E9E9E9"|300 |style="text-align:right;background-color:#E9E9E9"|651 |style="text-align:right;background-color:#E9E9E9"|300 |style="text-align:right;background-color:#E9E9E9"|362 |style="text-align:right;background-color:#E9E9E9"|240 |style="text-align:right;background-color:#E9E9E9"|216 |style="text-align:right;background-color:#E9E9E9"|204 |style="text-align:right;background-color:#E9E9E9"|1,578 |style="text-align:right;background-color:#E9E9E9"|1,044 |}

Overview of outcome

A total of 11 subsectors were uncontested, most of them in the First Sector, where business interests are rooted. Nevertheless, the election became much more competitive as supporters of both Henry Tang and Leung Chun-ying, the two potential candidates for the 2012 Chief Executive race, tried to gain as many seats as possible. The pan-democracy camp secured the 150-member threshold needed to nominate a candidate to contest the pro-Beijing-dominated Chief Executive election the following year.

Catering sub-sector

The Catering sub-sector was contested by two candidate lists: the Cater17, led by Catering Legislative Councillor Tommy Cheung Yu-yan and considered supporters of Henry Tang, and 星火行動 ("Spark Action"), led by Simon Wong Ka-wo and considered supporters of Leung Chun-ying. A total of 34 candidates from the two lists contested the 17 seats. The Cater17 list won all 17 seats. Notable elected candidates include Allan Zeman, chairman of Ocean Park Hong Kong.

Accountancy sub-sector

Many groups contested the Accountancy sub-sector. The A16 Alliance was formed by accountants from the Big Four firms and was considered part of Henry Tang's camp; 15 of its 16 candidates were elected, with Eric Li receiving the most votes. Two groups called "Your Vote Counts" and "Y5 Give Me Five" were supported by members of Leung Chun-ying's election campaign office and by Accountancy Legislative Councillor Paul Chan. "Your Vote Counts" got two of its six candidates elected and "Y5 Give Me Five" two of its five. The Action 9 group fielded nine candidates on a platform of increasing the supply of public housing, narrowing the gap between rich and poor, and implementing universal suffrage. The group stated that it would not rule out nominating a pan-democrat candidate, but also said it was open to the idea of nominating Leung Chun-ying; it got two members elected. All nine pan-democracy candidates were elected in the Accountancy sub-sector.
Chinese Medicine sub-sector

The Hong Kong Chinese Medicine Practitioners' Rights General Union challenged the pro-Beijing groups' dominance in the Chinese Medicine sub-sector, but all 11 of its candidates failed to get elected.

Education sub-sector

The Education sub-sector has been a stronghold of the pan-democracy camp. The pro-democratic Hong Kong Professional Teachers' Union put out a 25-candidate list for the 30 seats in the sub-sector, five of whom were also Democratic Party members. All of its candidates were elected, including Ip Kin-yuen, who won the Education functional constituency seat in the Legislative Council election the following year. The pro-Beijing Hong Kong Federation of Education Workers got only one seat. The Education Convergence had six candidates, three of whom were elected. One of the two candidates from the Hong Kong Women Teachers' Organization was also elected.

Engineering sub-sector

Sponsored by the Professional Commons, the pro-democratic group "Engineers for Universal Suffrage" (E4US) put out an 8-candidate list that included two Civic Party members, among them Albert Lai. Only Albert Lai and one other pan-democrat were elected. Other elected members included Lo Wai-kwok, who won the Engineering functional constituency in the Legislative Council in September 2012, and Mak Chai-kwong, who was appointed Secretary for Development by Leung Chun-ying in July 2012.

Higher Education sub-sector

Following on from its 15-candidate list in the previous Election Committee sub-sector election in 2006, the group "Academics In Support of Democracy" ran 24 candidates for the 30 seats in this election. Many of them held party membership: Joseph Cheng Yu-shek, Kenneth Chan Ka-lok, and Kuan Hsin-chi were from the Civic Party, Helena Wong Pik-wan from the Democratic Party, and two, including Chan King-ming, from the Neo Democrats.

Information Technology sub-sector

IT Voice 2012 was an election coalition for the Information Technology sub-sector formed by a group of pan-democrats, including Sin Chung-kai and Charles Mok. All 20 of its candidates were elected. The pro-Beijing ICT Energy, whose members included DAB member Elizabeth Quat, got only 8 of its 24 candidates elected. The two other elected candidates, both unaffiliated, included Ricky Wong.

Legal sub-sector

In the Legal sub-sector, 22 pro-democratic independent candidates led by former Hong Kong Bar Association chairman Edward Chan King-sang formed "ProDem22", and 8 candidates from the Democratic Party, the Civic Party, and the Association for Democracy and People's Livelihood formed "PanDem8", while Law Society of Hong Kong vice-chairman Ambrose Lam San-keung led another 12-candidate group, "Vox Pop", which was considered pro-Beijing. The 30 pan-democrat candidates took all the seats, while "Vox Pop" failed to win any.

Medical sub-sector

The Medical sub-sector was the most competitive subsector in the election, with a total of 83 candidates running for 30 seats. The Hong Kong Medical Association fielded 30 candidates, half of whom were elected. A list led by Ko Wing-man got two of its seven candidates elected. Public Surgeons' United also got two of its eight candidates elected, while the Hong Kong Public Doctors' Association failed to win a seat. A five-member pro-democratic group won two seats, including former Medical functional constituency Legislative Councillor Kwok Ka-ki.
Religious sub-sector

Ten Election Committee members are nominated by the Hong Kong Christian Council (HKCC), which was enlisted as the designated body for the Christian (Protestant) sub-subsector. The HKCC decided to adopt a "one Christian, one vote" method, with the voting date set for 30 October 2011. A total of 42 candidates were nominated. Of the 18,051 votes cast, 17,380 were counted as valid; there were 554 void votes and 117 blank votes. The result was considered a landslide victory for the pro-Beijing faction.

Social Welfare sub-sector

The Social Welfare sub-sector was another stronghold of the pan-democracy camp. Demo-Social 60 fielded 31 candidates, many of them Democratic Party members such as Law Chi-kwong and Yeung Sum; 29 were elected. The pro-democratic Hong Kong Social Workers' General Union also fielded 29 candidates, 28 of whom were elected. A smaller pro-democratic group, Progressive Social Work, had two of its eight candidates elected. The pro-Beijing Welfare Empower Hong Kong failed to win any seat.

District Councils sub-sectors

The pro-Beijing camp won a landslide victory in the Hong Kong and Kowloon District Councils sub-sector and the New Territories District Councils sub-sector, following its major success in the District Council elections in November. The DAB became the largest winner with 55 seats: 26 in the Hong Kong and Kowloon District Councils sub-sector and 29 in the New Territories District Councils sub-sector. The pan-democracy candidate lists failed to win any seats.

See also
2012 Hong Kong Chief Executive election
Politics of Hong Kong

References

2011 elections in China
December 2011 events in Asia
17047581
https://en.wikipedia.org/wiki/Ourproject.org
Ourproject.org
OurProject.org (OP) is a web-based collaborative free content repository. It acts as a central location for the construction and maintenance of social, cultural and artistic projects, providing web space and tools, and focusing on free knowledge. It claims to extend the ideas and methodology of free software to social areas and free culture in general. Since September 2009, Ourproject has been under the Comunes Association umbrella, and it gave birth to Kune, a collaborative social network for groups.

Philosophy

Ourproject was founded in 2002 with the aim of hosting and boosting cooperative work in multiple domains (cultural, artistic, educational), with one specific condition: the results of the projects should remain freely accessible under a free license. This is understood broadly, as not all of the available licenses are cataloged as free/libre (for example, several Creative Commons licenses). Its non-profit perspective is partially imposed on its online community of projects, as no advertising is allowed on the hosted webpages. Thus, OurProject projects have mainly been carried out by social movements, university-supported groups, free-software-related projects, cooperatives, artist collectives, activist groups, informal groups and non-profits.

Current situation

As of December 2016, OurProject.org was hosting 1,733 projects and had 5,969 users, with a constant linear growth rate. It had a PageRank of 6 and held the 14th position among public GForge sites, being the first of them not restricted to free software projects. In fact, the GNU Project highlights it as a "free knowledge & free culture" project. It claims to be the most successful wiki farm offering a fully free version without ads. It has a presence especially in Spain, Latin America and China.

Relevant projects using Ourproject infrastructure include:
the Critical Mass movement pages, a reference for the Hispanic movement
the Spanish organic-farming cooperative "Bajo el Asfalto está la Huerta", a reference in the Food sovereignty movement
the Kune project, a federated collaborative social network
the main free software association of Argentina
the community website of the Ubuntu Spanish community
the internal work group of the P2P Foundation

During 2011, it attracted interest from the new wave of protest movements in several areas: it hosted several projects related to the Arab Spring movements in Lebanon, Palestine and Syria; the social movement of the 2011 Spanish protests repeatedly used its services; and the American Occupy movement recommended its use in its how-to guide and began using it.

Software used

OurProject.org uses a multi-language, multi-topic adapted version of FusionForge. Its aim is to widen the spectrum of free software ideals, focusing on free social and cultural projects more than on free software alone. Thus, its software was originally a kind of social, multi-topic forge following the free-culture movement. More recently, the Ourproject community has been involved in the development of the collaborative environment Kune, which is eventually intended to cover all current Ourproject functionality.
Licenses allowed

The main condition for hosting projects at OP is that the content created during the project must be released under one of these licenses:
Creative Commons, Attribution
Creative Commons, Attribution-ShareAlike
Creative Commons, Attribution-NonCommercial-ShareAlike
GNU Free Documentation License (GFDL)
Open Publication License
Libre Designs General Public License (LDGPL)
Design Science License
Free Art License
Artistic License
GNU General Public License (GPL)
GNU Lesser General Public License (LGPL)
Affero General Public License (AGPL)
BSD License
Mozilla Public License
Public Domain
No license

Services

OP provides several free Internet services to free/libre project collaborators:
Web hosting with the subdomain (projectname).ourproject.org or virtual hosting
Mailing lists
Web forums
SSH account for full customization
MySQL database
Permanent file archival (FTP)
Wiki
Web administration
E-mail alias @users.ourproject.org
Periodical full backups
For software projects: SCM (CVS, SVN)
Other secondary services: task management, news service, documentation management, surveys, registering, file publication system, stats

Partners

After 9 years of existence, Ourproject has developed partnerships with several organizations:
GRASIA (Group of Software Agents, Engineering & Applications): a research group of Universidad Complutense de Madrid, offering joint grants to students and providing hardware resources.
American University of Science and Technology (Beirut): offering students the chance to collaborate with Ourproject, receiving training in administration, or framing their senior projects and master's theses in this environment.
IEPALA Foundation: it has joined Ourproject, together with other free software initiatives, to build a common data center.
Xsto.info: a free software cooperative that has been providing technical infrastructure to Ourproject without charge, and recently joined the aforementioned collective data center.

See also
Comunes Collective
Kune (software)
Libre knowledge
Free content
Copyleft
Open content
Free software movement
Open educational resources
Gratis versus Libre
Comparison of wiki farms

References

External links

Project hosting websites
Creative Commons-licensed websites
Collaborative projects
Internet properties established in 2002
Social networks for social change
Wiki farms
Multilingual websites
Web service providers
Creative Commons
Knowledge markets
4212598
https://en.wikipedia.org/wiki/Po%20Leung%20Kuk%20Lo%20Kit%20Sing%20%281983%29%20College
Po Leung Kuk Lo Kit Sing (1983) College
Po Leung Kuk Lo Kit Sing (1983) College is a Hong Kong secondary school. Located in Cheung Hong Estate, Tsing Yi, New Territories, the subsidised secondary school was founded in 1984 by Po Leung Kuk, a Hong Kong charitable organisation. It was the first secondary school on the island. The school was named Po Leung Kuk 1983 Board of Directors' College before 2011.

Introduction

The school arranges numerous regular extracurricular activities for all students. Annual school events include reading activities, dance festivals, music festivals, drama performances, Chinese cultural week, swimming galas, sports days and summer camp. The school has been a vanguard of project-based learning since the 1990s, when the approach was relatively new to many Hong Kong secondary schools. Students, divided into groups of four or five, have to finish a project over their summer vacation. Students also need to complete various projects related to each subject they are studying. The school adopted English as the medium of instruction (EMI) in September 2010 as part of its efforts to strengthen the role of the English language in the school. For instance, one-third of the school's lessons in an academic year are conducted in English, and its morning assembly is also conducted in English.

Name change controversy

Background

In January 2010, Po Leung Kuk, founder of the school, announced that a donor identified only as Mr Lo had donated HK$10 million to the school. The school then proposed a change to its name to honour the donor. The proposal soon sparked controversy among students and alumni of the school, who later launched a Facebook campaign asking for its withdrawal. During the consultation period, Po Leung Kuk held discussions with different stakeholders of the school, claiming the overall response to the name change was positive. In response to the controversy, the school's student union held a forum on 25 January 2010 to deliberate the issue; Hui Wing-ho, then principal of the school, was invited to the forum.

Confirmation

Po Leung Kuk confirmed the long-speculated name change in June 2011, after Lo Kit-sing, then vice-chairman of the Hong Kong Football Association, donated HK$7.8 million to the school. The school's new name, Po Leung Kuk Lo Kit Sing (1983) College, was unveiled on 14 September 2011 by a representative from Po Leung Kuk. The Education Bureau approved the school's application for the name change on 28 November 2011 and informed the school two days later that it could adopt the new name. A naming ceremony, officiated by Michael Suen Ming-yeung, then Secretary for Education, was held on 2 December 2011 to celebrate the name change.

Teaching overview

The development of major projects

The school actively promotes the four teaching reform programmes of "專閱德訊" (project learning, reading, moral education and information technology). Thematic research courses have been offered for more than a decade. In addition to reading programmes, the school holds an annual reading harvest day to promote reading. Teachers and students have also applied IT extensively in learning and teaching, and the Civic and Moral Education Group runs a wealth of standing activities such as "Voicing our values".

Information Technology Teaching

The school has more than 300 computers; each classroom is equipped with an LCD projector, a multimedia player system and a whiteboard. With all teachers having completed training in information technology, multimedia teaching has spread into most subjects.
It is not uncommon for students to watch videos and presentations and to download materials from the internet in class.

Extracurricular activities / Co-curricular activities (CCA)

The school's extracurricular activities include teams and interest groups, divided into five major categories: academic, arts, interests, sports and services. They take the expansion of learning space as their main theme, emphasizing learning experiences outside the classroom. The school has 59 extracurricular activities in five tiers, namely the four clubs, school teams, uniformed teams, academic societies and interest groups. A "one person, one team" policy is implemented in the first and second forms; the Music For All Programme is implemented in the first form, and executive training programmes in the third, fourth and fifth forms.

School teams: badminton team, women's volleyball team, men's volleyball team, women's basketball team, men's basketball team, swimming team, men's soccer team, women's soccer team, dodgeball team, choir, drama team, modern dance team, Chinese dance team, visual arts team, wind orchestra, string orchestra, Law Pioneers, performing arts team, creative thinking team, musical team, INLA dance team, Chinese orchestra, woodwind team, handbell team

Uniformed teams: Girl Scouts (NT 234), Scouts (Tsing Yi 12th Brigade)

Academic societies: Chinese Society, English Society, Mathematical Society, Science Society, Society of Social Sciences

Interest groups: English Debate Club, Chess Club, Broadcast Club, Christian Fellowship, Defense Science Club, Information Technology Club, English Learning Club of Painting, Geography Club, Home Economics Club, Magic Club, Creative Thinking Club, Model Production Club, Mandarin Club, Ping Pong Club, Handicraft Club, English Theater Club, Floral Society, Electronics Club, Campus TV Production Society, Journalism Training Club, Japanese Culture Club, Aviation Club

Student union

Each year the student union's cabinet is formed by students, with the chairman and officers organized beforehand; the student body then elects the cabinet by ballot between September and October. If more than one cabinet stands for election, voting is one person, one vote, and the cabinet with the most votes wins. Conversely, if only one cabinet stands, the ballot becomes a "trust / mistrust" vote, and the cabinet must obtain more than half of the "trust" votes, or there will be no cabinet that year.

School principals

Terence Chang Cheuk-cheung (1984–1987)
Isaac Tse Pak-hoi (1987–2003)
Hui Wing-ho (2003–2014)
Lo Wing-chung (2014–present)

Notable alumni (ascending order of admission year)

Karson Oten Fan Karno: a former English Language tutor in Hong Kong; he established All-Star Education, all of whose branches wound up in October 2008, and he announced his retirement in the same period.
Toni Wong Shan: a former journalist and reporter for ATV, TVB and Phoenix Hong Kong Channel.
Fong Fu Yeon: a secondary school teacher who has published articles online about social issues.
Lam Lei Carrie: the second runner-up in the 2005 Miss Hong Kong pageant.
Lui Wing Kai, Eric: a Hong Kong poet, author and film critic; Doctor of Philosophy.
Mak, George Kam Wah: a linguistics and religious studies scholar, now working at HKBU as an assistant professor.
Sonia Kong: a representative of the Hong Kong women's beach volleyball team, concurrently a Now TV beach volleyball commentator, football programme host and print model; she is a hospital nurse.
Tsang Kin Ho: one of the founders of Tower of Saviors.
Anson Lo: a member of Hong Kong boy band Mirror.
Albert Lee: a platoon sergeant of the Civil Aid Service.

See also

Po Leung Kuk
Education in Hong Kong
List of secondary schools in Hong Kong
List of schools in Hong Kong

References

External links

Po Leung Kuk Lo Kit Sing (1983) College (in Chinese)
Alumni Association Website (in Chinese)
Alumni Association Facebook Page (in Chinese)
Po Leung Kuk School Profile

Lo Kit Sing (1983) College
Educational institutions established in 1984
Secondary schools in Hong Kong
1984 establishments in Hong Kong
Tsing Yi
69898221
https://en.wikipedia.org/wiki/This%20Is%20the%20Show
This Is the Show
This Is the Show is the upcoming third studio album from multi-instrumentalist and songwriter Josh Klinghoffer, recording under the pseudonym Pluralone. The album was recorded in 2021 and is scheduled to be released in March 2022. It was produced by Dot Hacker member Clint Walsh, who also performs on the album, with appearances from Eric Gardner (also from Dot Hacker) and Eric Avery of Jane's Addiction. "Claw Your Way Out" was released as the first single on all digital platforms. The album will be released on vinyl (including a 500-copy limited edition clear colour record), CD and digitally.

Background

In interviews, Klinghoffer said that he had composed songs intended for his band Dot Hacker's fourth studio album. In 2021 the band released the single "Divination", but the full album did not materialize. Klinghoffer then began working on the songs with his bandmate Clint Walsh to release them as a Pluralone album. The album touches on themes ranging from post-World War II tension to anxiety in contemporary times and interpersonal relations.

Track listing

Personnel

Performed by Josh Klinghoffer & Clint Walsh
Eric Avery – bass on tracks 3 and 5
Eric Gardner – drums on tracks 1, 3, 5 and 7
Vanessa Freebairn-Smith – cello on track 10
Design by Kate Johnston

References

2022 albums
Josh Klinghoffer albums
Upcoming albums
26990506
https://en.wikipedia.org/wiki/Two%20%28Miss%20Kittin%20%26%20The%20Hacker%20album%29
Two (Miss Kittin & The Hacker album)
Two is the second studio album by French electronic music duo Miss Kittin & The Hacker, released on 12 March 2009 by Miss Kittin's label Nobody's Bizzness.

Background and development

In 2007, Miss Kittin & The Hacker reunited to release the single "Hometown / Dimanche" through Good Life Recordings. The music video for "Hometown" was directed by Régis Brochier of 7th floor Productions. Miss Kittin & The Hacker reunited for a European tour in 2008, and also toured throughout America and around the world, before they both started recording new songs, some of which were performed whilst touring. Several of the songs that they composed while on tour were included on the album Two.

Promotion

Miss Kittin & The Hacker toured as part of the Get Loaded in the Park festival in August 2009. They filmed a promotional music video for their cover of "Suspicious Minds", which was directed by Régis Brochier of 7th floor Productions. "Suspicious Minds" was later featured on the free downloadable mixtape Skull of Dreams by Little Boots.

Singles

"PPPO" was released as the album's first single on 13 March 2009. "1000 Dreams" was released as the album's second single on 3 April 2009. The song became a minor hit in Belgium, where it reached the top 20 of the Flanders Ultratip chart. The music video was directed by Régis Brochier of 7th floor Productions in January 2009. "Party in My Head" was released as the third and final single on 19 June 2009. XLR8R offered a free download of the Thieves Like Us remix of the single on its website.

Critical reception

Two received generally favorable reviews from music critics. At Metacritic, which assigns a normalised rating out of 100 to reviews from mainstream publications, the album received an average score of 62, based on 7 reviews. David Abravanel from PopMatters opined that "Two is as nakedly honest and lush an album as either party involved has made. [...] the record is a fantastic progression for Miss Kittin & The Hacker, managing to preserve what made them great partners in the first place, while gracefully maturing into something more multifaceted and emotionally open." Tom Naylor, writing for NME, commented that the album "finds them little changed. The Hacker is still a dab hand at dark electro, his rich, chewy tracks bubbling like molasses in a cauldron; Miss Kittin still veers close to self-parody (see 'Ray Ban') and they still sound best – 'Party In My Head', 'Emotional Interlude', a delicious, Europop cover of 'Suspicious Minds' – when they allow light, and vulnerability, to penetrate the gloom." URB remarked that "It seems as though they're as strong as ever. Is the production skeletal? You could say that. There isn't really a reason why that should be an issue though. Less is often more: any electro-head should know this to be true." Resident Advisor's Stéphane Girard opined, "In the end, Two isn't a massive step forward, but it isn't a step backward either. 'Suspicious Minds' aside, though, there is an air of self-confidence emanating from the songs collected on here that wasn't present eight years ago." In a review for XLR8R, Rob Geary stated, "Despite the ridiculously blank 'Ray Ban,' enough of Two excites by delving deeper into the subconscious (the crashing, Nord-led aggression of 'Indulgence') or stepping out into the sunlight (a goofy, cheerful cover of 'Suspicious Minds') to justify marking a calendar eight years from now for a Three."
Angela Shawn-Chi Lu from Venus Zine stated, "Abstract expressionism has finally met its electro-clash match. At times experimental with instrumental song intros and abstrusely minimalist with their lyrics, French duo Miss Kittin & The Hacker would do Franz Kline proud with their knack for scrumptious, slaptastic tunes ideal for the BDSM scene." Conversely, The Independent's Simon Price noted that "the temptation to revisit that blueprint – Kittin, the elegantly bored Euro-siren, meshing with Hacker's chattering techno textures – has eventually proven overwhelming. While Two never hits those heights, it has its club-friendly moments." Moreover, The Guardian's Alex Macpherson concluded that "on last year's sleek BatBox, she proved there was life in a shtick that, by rights, should have burned out half a decade ago in the dying embers of electroclash. But even as she pulled it off against the odds, one sensed the line between success and failure was being cut ever finer; and, in hooking up with her original partner in crime, The Hacker, Kittin has fallen on the wrong side of it." David Raposa from Pitchfork wrote that "as a supposed remembrance of the heyday of electroclash, it's a nostalgia trip that's best left untaken." Eric Henderson, writing for Slant Magazine, commented that when compared to the duo's First Album, "The words are flatter, the music is more generically attractive, and maybe we're all getting a little too old for this club."

Track listing

Personnel

Credits adapted from the liner notes of Two.
Miss Kittin – production, booklet photos
The Hacker – production, booklet photos
Pascal Gabriel – additional mixing at Studio Eleven (East London)
Benjamin – mastering at Translab (Paris)
Pierre-Jean Buisson – artwork, design

Charts

References

2009 albums
Miss Kittin albums
The Hacker albums
990677
https://en.wikipedia.org/wiki/SAS%20%28software%29
SAS (software)
SAS (previously "Statistical Analysis System") is a statistical software suite developed by SAS Institute for data management, advanced analytics, multivariate analysis, business intelligence, criminal investigation, and predictive analytics. SAS was developed at North Carolina State University from 1966 until 1976, when SAS Institute was incorporated. SAS was further developed in the 1980s and 1990s with the addition of new statistical procedures, additional components and the introduction of JMP. A point-and-click interface was added in version 9 in 2004. A social media analytics product was added in 2010.

Technical overview and terminology

SAS is a software suite that can mine, alter, manage and retrieve data from a variety of sources and perform statistical analysis on it. SAS provides a graphical point-and-click user interface for non-technical users, and more advanced options through the SAS language. SAS programs have DATA steps, which retrieve and manipulate data, and PROC steps, which analyze the data. Each step consists of a series of statements. The DATA step has executable statements that result in the software taking an action, and declarative statements that provide instructions to read a data set or alter the data's appearance. The DATA step has two phases: compilation and execution. In the compilation phase, declarative statements are processed and syntax errors are identified. Afterwards, the execution phase processes each executable statement sequentially. Data sets are organized into tables with rows called "observations" and columns called "variables". Additionally, each piece of data has a descriptor and a value. The PROC step consists of PROC statements that call upon named procedures. Procedures perform analysis and reporting on data sets to produce statistics, analyses, and graphics. There are more than 300 named procedures and each one contains a substantial body of programming and statistical work. PROC statements can also display results, sort data or perform other operations. SAS macros are pieces of code or variables that are coded once and referenced to perform repetitive tasks. SAS data can be published in HTML, PDF, Excel, RTF and other formats using the Output Delivery System (ODS), which was introduced in version 7. The SAS Enterprise Guide is SAS's point-and-click interface. It generates code to manipulate data or perform analysis automatically and does not require SAS programming experience to use. The SAS software suite has more than 200 components.

History

Origins

The development of SAS began in 1966 after North Carolina State University re-hired Anthony Barr to program his analysis of variance and regression software so that it would run on IBM System/360 computers. The project was funded by the National Institutes of Health and was originally intended to analyze agricultural data to improve crop yields. Barr was joined by student James Goodnight, who developed the software's statistical routines, and the two became project leaders. In 1968, Barr and Goodnight integrated new multiple regression and analysis of variance routines. In 1972, after issuing the first release of SAS, the project lost its funding. According to Goodnight, this was because NIH only wanted to fund projects with medical applications. Goodnight continued teaching at the university for a salary of $1 and access to mainframe computers for use with the project, until it was funded by the University Statisticians of the Southern Experiment Stations the following year.
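The following short program is a minimal sketch illustrating the DATA step, the PROC step, a macro variable and ODS output as described in the Technical overview above. It is not taken from SAS documentation; the data set name, variable names, values and the output file name are invented for illustration.

/* Macro variable: defined once, referenced wherever needed */
%let analysis_var = height;

/* DATA step: the INPUT statement reads one observation per data
   line into two variables; '$' marks name as a character variable */
data class;
    input name $ &analysis_var.;
    datalines;
Alfred 69.0
Alice 56.5
Henry 63.5
;
run;

/* Output Delivery System: route the procedure output to HTML */
ods html file='summary.html';

/* PROC step: the MEANS procedure reports summary statistics */
proc means data=class mean min max;
    var &analysis_var.;
run;

ods html close;

Run against this three-observation data set, PROC MEANS would print the mean, minimum and maximum of the height variable, and the ODS statements would additionally publish the same output as an HTML file.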
John Sall joined the project in 1973 and contributed to the software's econometrics, time series, and matrix algebra routines. Another early participant, Caroll G. Perkins, contributed to SAS' early programming. Jolayne W. Service and Jane T. Helwig created SAS' first documentation. The first versions of SAS were named after the year in which they were released. In 1971, SAS 71 was published as a limited release. It was used only on IBM mainframes and had the main elements of SAS programming, such as the DATA step and the most common procedures in the PROC step. The following year a full version was released as SAS 72, which introduced the MERGE statement and added features for handling missing data or combining data sets. In 1976, Barr, Goodnight, Sall, and Helwig removed the project from North Carolina State and incorporated it into SAS Institute, Inc.

Development

SAS was re-designed in SAS 76 with an open architecture that allowed for compilers and procedures. The INPUT and INFILE statements were improved so they could read most data formats used by IBM mainframes. Generating reports was also added through the PUT and FILE statements. The ability to analyze general linear models was also added, as was the FORMAT procedure, which allowed developers to customize the appearance of data. In 1979, SAS 79 added support for the CMS operating system and introduced the DATASETS procedure. Three years later, SAS 82 introduced an early macro language and the APPEND procedure. SAS version 4 had limited features, but made SAS more accessible. Version 5 introduced a complete macro language, array subscripts, and a full-screen interactive user interface called Display Manager. In 1985, SAS was rewritten in the C programming language. This allowed for SAS' Multivendor Architecture, which lets the software run on UNIX, MS-DOS, and Windows. It was previously written in PL/I, Fortran, and assembly language.

In the 1980s and 1990s, SAS released a number of components to complement Base SAS. SAS/GRAPH, which produces graphics, was released in 1980, as well as the SAS/ETS component, which supports econometric and time series analysis. A component intended for pharmaceutical users, SAS/PH-Clinical, was released in the 1990s. The Food and Drug Administration standardized on SAS/PH-Clinical for new drug applications in 2002. Vertical products like SAS Financial Management and SAS Human Capital Management (then called CFO Vision and HR Vision respectively) were also introduced. JMP was developed by SAS co-founder John Sall and a team of developers to take advantage of the graphical user interface introduced in the 1984 Apple Macintosh, and shipped for the first time in 1989. Updated versions of JMP were released continuously after 2002, with the most recent release being from 2016.

SAS version 6 was used throughout the 1990s and was available on a wider range of operating systems, including Macintosh, OS/2, Silicon Graphics, and PRIMOS. SAS introduced new features through dot-releases. From 6.06 to 6.09, a user interface based on the windows paradigm was introduced and support for SQL was added. Version 7 introduced the Output Delivery System (ODS) and an improved text editor. ODS was improved upon in successive releases. For example, more output options were added in version 8. The number of operating systems that were supported was reduced to UNIX, Windows and z/OS, and Linux was added. SAS version 8 and SAS Enterprise Miner were released in 1999.

Recent history

In 2002, the Text Miner software was introduced.
Text Miner analyzes text data like emails for patterns in business intelligence applications. In 2004, SAS Version 9.0 was released, which was dubbed "Project Mercury" and was designed to make SAS accessible to a broader range of business users. Version 9.0 added custom user interfaces based on the user's role and established the point-and-click user interface of SAS Enterprise Guide as the software's primary graphical user interface (GUI). The customer relationship management (CRM) features were improved in 2004 with SAS Interaction Management. In 2008, SAS announced Project Unity, designed to integrate data quality, data integration and master data management.

SAS Institute Inc v World Programming Ltd was a lawsuit with the developers of a competing implementation, World Programming System, alleging that they had infringed SAS's copyright in part by implementing the same functionality. This case was referred from the United Kingdom's High Court of Justice to the European Court of Justice on 11 August 2010. In May 2012, the European Court of Justice ruled in favor of World Programming, finding that "the functionality of a computer program and the programming language cannot be protected by copyright."

A free version was introduced for students in 2010. SAS Social Media Analytics, a tool for social media monitoring, engagement and sentiment analysis, was also released that year. SAS Rapid Predictive Modeler (RPM), which creates basic analytical models using Microsoft Excel, was introduced that same year. JMP 9 in 2010 added a new interface for using the R programming language from JMP and an add-in for Excel. The following year, a High Performance Computing appliance was made available in a partnership with Teradata and EMC Greenplum. In 2011, the company released Enterprise Miner 7.1. The company introduced 27 data management products from October 2013 to October 2014 and updates to 160 others. At the 2015 SAS Global Forum, it announced several new products specialized for different industries, as well as new training software.

Release dates

SAS has had many releases since 1972. Since release 9.3, SAS/STAT has had its own release numbering.

Software products

As of 2011, SAS's largest set of products is its line for customer intelligence. Numerous SAS modules for web, social media and marketing analytics may be used to profile customers and prospects, predict their behaviors and manage and optimize communications. SAS also provides the SAS Fraud Framework. The framework's primary functionality is to monitor transactions across different applications, networks and partners, and use analytics to identify anomalies that are indicative of fraud. SAS Enterprise GRC (Governance, Risk and Compliance) provides risk modeling, scenario analysis and other functions in order to manage and visualize risk, compliance and corporate policies. There is also a SAS Enterprise Risk Management product set designed primarily for banks and financial services organizations. SAS' products for monitoring and managing the operations of IT systems are collectively referred to as SAS IT Management Solutions. SAS collects data from various IT assets on performance and utilization, then creates reports and analyses. SAS' Performance Management products consolidate and provide graphical displays for key performance indicators (KPIs) at the employee, department and organizational level.
The SAS Supply Chain Intelligence product suite is offered for supply chain needs, such as forecasting product demand, managing distribution and inventory and optimizing pricing. There is also a "SAS for Sustainability Management" set of software to forecast environmental, social and economic effects and identify causal relationships between operations and an impact on the environment or ecosystem. SAS has product sets for specific industries, such as government, retail, telecommunications and aerospace, and for marketing optimization or high-performance computing.

Free University Edition

SAS also offers a Free University Edition, which can be downloaded by anyone for non-commercial use. The first announcement regarding the Free University Edition appeared in newspapers on 28 May 2014.

Comparison to other products

In a 2005 article for the Journal of Marriage and Family comparing statistical packages from SAS and its competitors Stata and SPSS, Alan C. Acock wrote that SAS programs provide an "extraordinary range of data analysis and data management tasks," but were difficult to use and learn. SPSS and Stata, meanwhile, were both easier to learn (with better documentation) but had less capable analytic abilities, though these could be expanded with paid (in SPSS) or free (in Stata) add-ons. Acock concluded that SAS was best for power users, while occasional users would benefit most from SPSS and Stata. A comparison by the University of California, Los Angeles, gave similar results. Competitors such as Revolution Analytics and Alpine Data Labs advertise their products as considerably cheaper than SAS'. In a 2011 comparison, Doug Henschen of InformationWeek found that start-up fees for the three are similar, though he admitted that the starting fees were not necessarily the best basis for comparison. SAS' business model is not weighted as heavily on initial fees for its programs, instead focusing on revenue from annual subscription fees.

Adoption

According to IDC, SAS is the largest market-share holder in "advanced analytics" with 35.4 percent of the market as of 2013. It is the fifth-largest market-share holder for business intelligence (BI) software with a 6.9% share and the largest independent vendor. It competes in the BI market against conglomerates such as SAP BusinessObjects, IBM Cognos, SPSS Modeler, Oracle Hyperion, and Microsoft Power BI. SAS has been named in the Gartner Leaders' Quadrant for Data Integration Tools and for Business Intelligence and Analytics Platforms. A study published in 2011 in BMC Health Services Research found that SAS was used in 42.6 percent of data analyses in health service research, based on a sample of 1,139 articles drawn from three journals.

See also

Comparison of numerical-analysis software
Comparison of OLAP servers
JMP (statistical software), also from SAS Institute Inc.
SAS language

References

Further reading

Wikiversity: Data Analysis using the SAS Language

External links

Free Statistical Software, SAS University Edition
A Glossary of SAS terminology
SAS for Developers
The SAS customer community Wiki

Fourth-generation programming languages
Articles with example code
Business intelligence
C (programming language) software
Criminal investigation
Data mining and machine learning software
Data warehousing
Extract, transform, load tools
Mathematical optimization software
Numerical software
Proprietary commercial software for Linux
Proprietary cross-platform software
Science software for Linux
36025084
https://en.wikipedia.org/wiki/MacKeeper
MacKeeper
MacKeeper is utility software developed by ZeoBIT and presently distributed by Clario Tech Limited. The first beta version was released on 13 May 2010. It is designed to operate on computers running macOS. Versions 1.0 and 2.0 received mixed reviews, while version 3.0, released in 2014, was reviewed mostly negatively and, according to Macworld, was difficult to uninstall. The latest version, 5.0, was released in 2020 after Clario Tech acquired Kromtech's MacKeeper in 2019 and relaunched the software.

History

MacKeeper was initially developed in 2009 by Ukrainian programmers at Zeobit. The first beta version, MacKeeper 0.8, was released on 13 May 2010. MacKeeper 1.0 was released on 26 October 2010. MacKeeper 2.0 was released on 30 January 2012 at Macworld – iWorld with an expanded number of utilities related to security, data control, cleaning and optimization. In April 2013, Zeobit sold MacKeeper to Kromtech Alliance Corp. Kromtech was closely affiliated with Zeobit in Ukraine and hired many former Kyiv-based Zeobit employees. MacKeeper 3.0 was released in June 2014 as software as a service, with a new "human expert" feature and optimization for OS X Yosemite. In July 2018, MacKeeper 4.0 was released.

In December 2015, security researcher Chris Vickery discovered a publicly accessible database of 21 GB of MacKeeper user data on the internet, exposing the usernames, passwords and other information of over 13 million MacKeeper users. According to Kromtech, this was the result of a "server misconfiguration" and the error was "fixed within hours of the discovery". In 2019, Clario acquired the IP and human capital of Kromtech, including all of MacKeeper. As of 2020, five major versions of MacKeeper had been released.

Features

It integrates Avira's anti-malware scanning engine, but some versions opened a critical security hole. The filesystem-level encryption tool can encrypt files or folders with a password. The data recovery utility permits users to recover unintentionally deleted files. Backup software is also included, which can copy files to a USB flash drive, external HDD or FTP server. The data erasure tool permits users to permanently delete files, although PC World argues that this feature duplicates the secure empty trash feature formerly built into macOS. The disk cleaner finds and removes junk files on the hard drive in order to free up space.

Reception

Versions 1.0 and 2.0

The earlier bundles received mixed reviews, with reviewers being divided as to the effectiveness of the software. Macworld gave MacKeeper 3.5 out of 5 stars in August 2010, based on the 0.9.6 build of the program, and found it a reasonably priced set of tools but experienced lagging while switching between tools. MacLife rated it at 2.5 out of 5 and said it was useful mainly for freeing up drive space, but found other features offered inconsistent results and believed most users would not need its antivirus feature. AV-Comparatives found that MacKeeper had an excellent ability to detect Mac-based malware. They noted that it was "very well suited to enthusiasts who have a good understanding of security issues, but not ideal for non-expert users who need pre-configured optimal security for their Macs." Zeobit claims that negative attacks were launched against MacKeeper by an unnamed competitor, and that many users and press were confusing MacKeeper with another application.
Version 3.0

Reviews of version 3.0 were largely negative. A May 2015 test by PC World found that MacKeeper identified the need for extensive corrections on brand-new, fully patched machines. In December 2015, Business Insider and iMore suggested users avoid the product and not install it. Top Ten Reviews removed MacKeeper from its top 10 ranking, noting that although the software had more features than its competitors, its performance in Mac malware identification tests showed other software had better detection rates, resulting in a score of 7.5 out of 10. A July 2017 AV-TEST assessment found MacKeeper only detected 85.9 percent of the tested malware.

MacKeeper has been criticized for being very difficult to uninstall; according to Macworld, people frequently ask how they can get rid of MacKeeper. Both Tom's Guide and Macworld have published how-to guides for deleting the software. Macworld observed that aggressive MacKeeper advertising leads people to believe that the software is either malware or a scam, when it is neither; Macworld also notes that some pop-up and pop-under ads may be due to third-party installers. Computerworld described MacKeeper as "a virulent piece of software that promises to cure all your Mac woes, but instead just makes things much worse".

Version 4.0

MacKeeper 4.0 was released in July 2018. It included improvements such as anti-tracking, a macOS VPN and a combination of security, privacy and anti-fraud technologies.

Version 5.0

MacKeeper 5.0, released in November 2020, was notarized by Apple and tested by AV-TEST. According to The Mac Observer, the security certifications allow the new version a low level of system privilege on Mac hardware. TechRadar rated MacKeeper 5.0 as a product with "a decent level of protection and performance optimization features".

MacKeeper 5.0 has the following major tools:
Find & Fix – a one-click scan to review Mac status
Antivirus with real-time protection
ID Theft Guard, which detects data breaches and password leaks
StopAd, to block ads in Chrome and Safari
Safe Cleanup, to detect and remove unneeded attachments and files
Duplicates Finder, to spot duplicate files and sort similar photos or screenshots
VPN

Marketing techniques

Multiple reviewers have criticized Zeobit's marketing and promotional techniques. Kromtech buys upwards of 60 million ad impressions a month, making it one of the largest buyers of web traffic aimed at Mac users. Zeobit has been accused of employing misleading advertising in its promotion of MacKeeper, including aggressive affiliate marketing, pop-under ads and planting sockpuppet reviews, as well as websites set up to discredit its competitors. Kromtech has also had issues with affiliate advertisers, attracted by the 50 percent commissions Kromtech pays for sales of MacKeeper, who have wrapped MacKeeper ads into adware. In 2018, Kromtech began to take steps against affiliate marketers it said were scamming users. The company recovered after changing its management.

Lawsuits

In January 2014, a class action lawsuit was filed against Zeobit in Illinois. The lawsuit alleged that "neither the free trial nor the full registered versions of MacKeeper performed any credible diagnostic testing" and that the software reported that a consumer's Mac was in need of repair and at risk due to harmful errors. In May 2014, a lawsuit was filed against Zeobit in Pennsylvania, alleging that MacKeeper fakes security problems to deceive victims into paying for unneeded fixes.
On 10 August 2015, Zeobit settled a class action lawsuit against it. Customers who bought MacKeeper before 8 July 2015 could apply for a refund. Kromtech also filed at least two unsuccessful lawsuits against parties it perceived to be defaming it. In July 2013, Kromtech filed a lawsuit against MacPaw, the developers of CleanMyMac, alleging that MacPaw employees had created several usernames and posted on several websites defaming the MacKeeper software. The case was dismissed before the hearing. A year later, in 2014, Kromtech filed a lawsuit against David A. Cox, alleging that he defamed Kromtech by calling MacKeeper a fraudulent application in a YouTube video. The judge dismissed the case for lack of personal jurisdiction. In July 2016, Kromtech sent a cease and desist letter to Luqman Wadood, a 14-year-old technology reviewer, for alleged harassment and slander of the MacKeeper brand in a number of YouTube videos. Wadood said the videos were diplomatic.

See also

Comparison of antivirus software
Comparison of firewalls
Internet Security

References

Antivirus software
Proprietary software
MacOS-only software
Utilities for macOS
MacOS security software
1248954
https://en.wikipedia.org/wiki/Rexel
Rexel
Rexel is a French company specializing in the distribution of electrical, heating, lighting and plumbing equipment, as well as renewable energy and energy efficiency products and services. Founded in 1967, Rexel has broadened its scope of activity over the years. Today, its offer combines a wide range of equipment and services in the fields of automation, technical expertise, energy management, lighting, security, climatic engineering, communication, home automation and renewable energies.

History

The Rexel Group is the descendant of Compagnie de Distribution de Matériel Électrique (CDME), which was created in 1967 by Compagnie Lebon. CDME was the result of the merger of four companies: Revimex, Facen, Sotel and Lienard-Soval. It specialized in electrical equipment sales and grew in France through the acquisition of regional family businesses. In 1978, the company launched a professional electronic and computer equipment distribution branch and diversified into the industrial supplies business. In the 1980s, CDME began its expansion into European and international markets by setting up in Cyprus, Saudi Arabia, Portugal, Benelux, West Germany, Singapore and Canada. In 1983, CDME joined the unlisted securities market of the Paris Stock Exchange. The company then held more than 20% of the market share in France and 6% worldwide. In 1986, it began doing business in the United States. At that time, it had 350 points of sale, 65 of which were abroad. In 1987, Compagnie Française de l'Afrique Occidentale (CFAO) became the main shareholder of CDME (68%).

Rexel Group

The Pinault Group acquired CDME in December 1990 and became its main shareholder. The electrical equipment distribution subsidiary accounted for 38% of the Pinault Group's total sales. CDME was then the leading distributor of electrical equipment in France (30% market share), Belgium and Portugal. In June 1993, CDME joined forces with Groupelec Distribution; that is when the group took on the name Rexel. In the 1990s, the group continued to strengthen its business in France, Europe and the United States. The company re-centered its operations on the distribution of electrical equipment: in 1988 it had sold most of its professional electronics distribution interests, and in 1994 it separated from its subsidiary GDFI, France's largest distributor of industrial supplies. The group's subsidiaries gradually adopted the Rexel brand identity abroad. Willcox and Gibbs in the United States became Rexel Inc. in 1995; Rexel Italia was born in 2000 from the merger of 10 Italian subsidiaries; and the first joint venture was formed in China under the name Rexel Hailongxing.

Evolution of the group

The PPR (Pinault-Printemps-Redoute) Group announced the sale of Rexel, which was finalized on March 16, 2005, with the Ray Investment consortium acquiring shares in the company. The consortium members included Clayton, Dubilier & Rice, Eurazeo and Merrill Lynch Global Private Equity. Rexel was de-listed from the Paris Stock Exchange on April 25, 2005. The Rexel Group continued refocusing on its core business. It disposed of certain assets while embarking on a series of acquisitions. In addition to the 29 small and medium-sized acquisitions it made over the period, in 2006 the group bought the American distribution subsidiary of General Electric, GE Supply, which it renamed Gexpro. That moved Rexel into the number one position in North America and Asia-Pacific.
Gexpro reinforced the group's provision of services to major industrial accounts. In 2007, Rexel teamed up with the Sonepar Group to make a joint offer to purchase the Hagemeyer Group, which was then number three worldwide. In 2007, Rexel changed its legal status to become a French public limited company (société anonyme) with a management board and supervisory board. The company was listed on the Paris stock exchange on April 4, 2007 on the Euronext market. In March 2008, Rexel acquired the majority of Hagemeyer's European assets (Elektroskandia in Scandinavia, ABM in Spain and Newey & Eyre in the United Kingdom). Following that acquisition, Rexel doubled its total sales in Europe, increased the number of points of sale by 50%, and entered the markets of five new countries (Finland, Norway and the Baltic States). Rexel was the largest or second-largest market player in Germany, Spain, the United Kingdom, Norway, Sweden and Finland. In 2008, Rexel held around 9% of the world market for the distribution of professional electrical equipment. With 2,500 points of sale in 34 countries, the group was a leading player in this sector.

In 2011, the company began seeking acquisitions in emerging markets. As a result, Rexel strengthened its position in China with the acquisition of Lucky Well Zhineng and notably penetrated markets in Brazil (Nortel Suprimentos Industriais), India (Yantra Automation) and Peru (where it bought V&F Technologia, a Lima-based distributor of electrical equipment). In early 2012, the group acquired the companies Delamano and Etil, becoming a leader in the Brazilian market. It also continued expanding in the United States by acquiring the independent electrical equipment distributor Platt Electric Supply and Munro Distributing Company for 115 million euros, which bolstered its presence in the American energy efficiency market. The year 2012 also marked the start of Rudy Provoost's tenure as chairman of the management board; he succeeded Jean-Charles Pauze. The company launched the Energy in Motion project in 2012, which put customers at the heart of its strategy. In June 2013, the group created the Rexel Foundation to harness its know-how and expertise to fight energy poverty. In 2014, the group announced it was purchasing Esabora, a software company that publishes sales and administrative management tools for electricians. In 2015, the group strengthened its position in the multi-energy sector in France with the acquisition of Sofinther, a distributor specializing in thermal, heating and control equipment. On April 30, 2015, Rexel announced the sale of its businesses in Latin America. In January 2016, the group sold its interests in Poland, Slovakia and the Baltic States to the Würth Group. In February 2016, as part of its acquisition policy, Rexel purchased Brohl & Appell, an American company specialized in industrial automation and MRO (Maintenance, Repair and Operations) services. In July 2016, Rexel changed its governance structure and separated the functions of Chairman of the Board of Directors and Chief Executive Officer. Patrick Berard was appointed Chief Executive Officer and Ian Meakins was named Chairman of the Board of Directors.
In February 2017, Rexel announced a new strategic plan intended to refocus its activity on the most promising countries and market segments in order to accelerate the group's organic growth, by promoting the digital transformation of electrical equipment and hardware, including connected objects. In December 2017, the group announced it would sell its activities in South-East Asia to American Industrial Acquisition Corporation Group. In 2020, the group parted ways with its Gexpro Services business to devote its resources to the core of its strategy: the distribution of electrical equipment in the United States and its digital transformation. The group reasserted its commitment to evolve toward a data-driven business model. In February 2021, the group acquired the Canadian operations of Wesco International, and in March 2021 it announced the acquisition of Freshmile Services, an independent charging station operator that offers monitoring services and software. The group also acquired a 25% minority stake in Trace Software International, a software company specializing in electrical design. These two moves were intended to build the group's electromobility capacity. In November 2021, Rexel closed on the acquisition of Mayer, a major distributor of electrical products and services operating in the eastern part of the United States. This followed the group's announcement in February that it would resume bolt-on acquisitions to strengthen its positioning in the United States. The Mayer business added 68 points of sale and was generating $1.2 billion in annual sales as of August 2021.
Activities
Rexel supplies equipment for the installation and use of electricity, mainly in residential, tertiary and industrial buildings. The group is also positioned on industrial infrastructure projects (mines, hydroelectric power stations, natural gas). The products marketed by Rexel relate in particular to medium-voltage electrical circuits in buildings: secure meters (general circuit breakers), current carriers (wires and cables), attachments and protections (baseboards, cable trays), as well as equipment that protects circuits (auxiliary circuit breakers, kill switches) and that connects and controls devices (sockets, contactors, switches). In addition, the group offers equipment for low-voltage circuits (internet, stereos, television, telephone) and technical building management (home automation and building automation) with its own multi-protocol and multi-brand smart home solution. The company also markets electrical appliances: lighting equipment, heating, ventilation, air conditioning, motors, telephones, audiovisual equipment and so on. In recent years, Rexel has developed low-consumption solutions (lighting, high-efficiency motors) and products that use renewable energies: solar, wind, geothermal and aerothermal (heat pumps, solar water heaters, climate engineering solutions).
Services
The services marketed by the Rexel Group can be divided into seven categories:
Infrastructure and network services, comprising an infrastructure and network educational offer as well as an infrastructure and network maintenance after-sales service
Industrial process services, integrating an industrial process educational offer, an industrial process maintenance after-sales service and industrial process technical assistance
Heating, electricity, air conditioning and ventilation services
Lighting services
Building outfitting and building control services, including an educational outfitting and building control offer, application and building control training, and application and building control commissioning
Energy distribution and management services, comprising an educational energy distribution and management offer, technical assistance, energy distribution and management training, and distribution training
Audit, pre-sales consulting, and recycling and waste collection services
Software
In 2014, the group bought Esabora, originally an e-commerce software publisher, which has become the name of a customer-oriented digital solution offering administrative, commercial and technical assistance. This cloud-based software suite allows companies to access studies (3D blueprints, BIM), site management, personalized pricing and online purchase monitoring. In 2021, the group launched a new application, BtoB EConnect Pro, which enables remote maintenance and diagnostics at connected sites. The group is positioned on the smart grid and smart building market.
Data/AI
The company launched a single web and data platform for all its brands, offers and customers in the United States, a region whose sales account for more than 30% of total revenue. In September 2020, the group and four other major French companies made a joint commitment to "Hi! Paris", an interdisciplinary research and teaching center created by HEC and Institut Polytechnique de Paris dedicated to artificial intelligence and data sciences.
Distribution
The Rexel Group's sales at December 31, 2020, totaled 12.59 billion euros. The geographical breakdown of its revenue is as follows: Europe (56%), North America (35%), and Asia Pacific (9%). The sales breakdown by segment is as follows: 43% for the tertiary market, 29% for the industrial market and 28% for the residential sector. The group's net income from operations amounted to 277.7 million euros in 2020.
References
External links
Business services companies established in 1967
Distribution companies of France
French brands
Lighting brands
Private equity portfolio companies
French companies established in 1967
Companies based in Paris
Companies listed on Euronext Paris
1252517
https://en.wikipedia.org/wiki/File%20deletion
File deletion
File deletion is the removal of a file from a computer's file system. All operating systems include commands for deleting files (rm on Unix, era in CP/M and DR-DOS, del/erase in MS-DOS/PC DOS, DR-DOS, Microsoft Windows etc.). File managers also provide a convenient way of deleting files. Files may be deleted one-by-one, or a whole directory tree may be deleted.
Purpose
Examples of reasons for deleting files are:
Freeing the disk space
Removing duplicate or unnecessary data to avoid confusion
Making sensitive information unavailable to others
Removing an operating system or blanking a hard drive
Accidental removal
A common problem with deleting files is the accidental removal of information that later proves to be important. A common method to prevent this is to back up files regularly. Erroneously deleted files may then be found in archives. Another technique often used is not to delete files instantly, but to move them to a temporary directory whose contents can then be deleted at will. This is how the "recycle bin" or "trash can" works. Microsoft Windows and Apple's macOS, as well as some Linux distributions, all employ this strategy. In MS-DOS, one can use the undelete command. In MS-DOS the "deleted" files are not really deleted, but only marked as deleted, so they can be undeleted for some time, until the disk blocks they used are eventually taken up by other files. This is how data recovery programs work: they scan for files that have been marked as deleted. Because the freed space is reclaimed block by block rather than file by file, recovered data may sometimes be incomplete. Defragmenting a drive may prevent undeletion, as the blocks used by a deleted file, being marked as "empty", may be overwritten in the process. Another precautionary measure is to mark important files as read-only. Many operating systems will warn the user trying to delete such files. Where file system permissions exist, users who lack the necessary permissions are only able to delete their own files, preventing the erasure of other people's work or critical system files. Under Unix-like operating systems, in order to delete a file, one must usually have write permission to the parent directory of that file.
Sensitive data
The common problem with sensitive data is that deleted files are not really erased and so may be recovered by interested parties. Most file systems only remove the link to the data (see undelete, above). But even overwriting parts of the disk with something else, or formatting it, may not guarantee that the sensitive data is completely unrecoverable. Special software is available that overwrites data, and modern (post-2001) ATA drives include a secure erase command in firmware; a minimal code sketch of the overwriting approach appears at the end of this article. However, high-security applications and high-security enterprises can sometimes require that a disk drive be physically destroyed to ensure data is not recoverable, as microscopic changes in head alignment and other effects can mean even such measures are not guaranteed. When the data is encrypted, only the encryption key has to be made unavailable. Crypto-shredding is the practice of "deleting" data by (only) deleting or overwriting the encryption keys.
See also
Crypto-shredding
Data erasure
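The behaviors described above can be illustrated in a few lines of code. The following Python sketch contrasts ordinary deletion, trash-style deletion (moving the file to a holding directory so it can be restored), and a best-effort overwrite before deletion for sensitive data. It is a simplified, hypothetical example: the function names and trash location are invented, and the single-pass overwrite does not guarantee unrecoverability on modern hardware (journaling file systems, wear leveling and SSD block remapping can leave stale copies), so real applications should prefer platform trash APIs and dedicated secure-erase tools.

```python
import os
import shutil
import uuid

TRASH_DIR = os.path.expanduser("~/.mytrash")  # hypothetical trash location

def delete(path):
    """Ordinary deletion: removes only the directory entry.
    The file's blocks are merely marked free and may be recoverable."""
    os.remove(path)

def trash(path):
    """'Recycle bin' style deletion: move the file to a holding
    directory so an accidental deletion can be undone later."""
    os.makedirs(TRASH_DIR, exist_ok=True)
    target = os.path.join(TRASH_DIR, f"{uuid.uuid4()}-{os.path.basename(path)}")
    shutil.move(path, target)
    return target  # keep this path to restore the file later

def overwrite_and_delete(path, passes=1):
    """Best-effort handling of sensitive data: overwrite the file's
    contents with random bytes before unlinking it. NOTE: on journaling
    file systems and SSDs this does not guarantee the old data is gone."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1 << 20)  # overwrite in 1 MiB chunks
                f.write(os.urandom(chunk))
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)
```

Restoring a trashed file is then just the reverse move: shutil.move(saved_path, original_path).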
19944241
https://en.wikipedia.org/wiki/Peter%20Daland
Peter Daland
Peter Daland (April 12, 1921 – October 20, 2014) was a swimming coach from the United States. He was born in New York City. His coaching career spanned over 40 years. Daland attended Harvard University before enlisting in the United States Army for World War II. After the war, he graduated from Swarthmore College in 1948 and got his first coaching job at Rose Valley, Pennsylvania, where he won 8 straight Suburban League titles (1947–55). He founded and was the first coach of the Suburban Swim Club in Newtown Square, Pennsylvania, and served as an assistant to Bob Kiphuth at Yale University before deciding to take Horace Greeley's advice and head west in 1956 as coach at the University of Southern California in Los Angeles and the Los Angeles Athletic Club. In 1958, he returned to Yale with five USC freshmen and won the National AAU Team Title from the New Haven Swim Club. For 35 years (1957–1992), Daland was the swimming coach for the USC Trojans, leading them to 9 NCAA Championships. He also led teams to 14 AAU Men's National titles and 2 AAU Women's National titles, making him the only coach to have won all three major national team championships: NCAA, National AAU Men's, and National AAU Women's (with the LAAC). Specializing in family dynasties, Daland had the good fortune of championship wins from the brothers Devine, Bottoms, Furniss and Orr, and the House brother-and-sister act. His Trojan teams won more than 160 dual meets with more than 100 individual titles. As of 1974, Daland's record boasted 183 individual national champions. Daland also coached the U.S. women's swim team at the 1964 Olympics, where his swimmers won 15 of the 24 medals awarded in women's swim events. He then coached the US men's team at the 1972 Olympics, where his swimmers won 26 of the 45 medals awarded in men's events. In those Olympics, Mark Spitz of the United States had a spectacular run, lining up for seven events, winning seven Olympic titles and setting seven world records. Daland was also active in the swimming community via his roles with FISU, the International University Sports Federation, and ASCA, the American Swimming Coaches Association. He was one of the founders of ASCA and was inducted into the International Swimming Hall of Fame in 1977. The pool of USC's Uytengsu Aquatics Center bears his name. Daland was married to the former German top-class swimmer Ingrid Feuerstack. They had three children: Peter Jr., Bonnie, and Leslie. Leslie now owns the Daland Swim School in Thousand Oaks, California, which was founded by Ingrid. Leslie's daughter, Reyna, is a standout high school swimmer and is employed at the Daland Swim School. On October 20, 2014, Daland died in Thousand Oaks, California, at the age of 93, of Alzheimer's disease.
Honors and awards
1962 ASCA Coach of the Year
1964 Olympics Women's Swimming Team Coach for the USA
1972 Olympics Men's Swimming Head Coach for the USA
1977 AAU Swimming Award recipient
See also
List of members of the International Swimming Hall of Fame
References
External links
Daland's bio from the ASCA Hall of Fame.
American swimming coaches
USC Trojans swimming coaches
2014 deaths
1921 births
Sportspeople from New York City
Harvard University alumni
United States Army personnel of World War II
Yale Bulldogs swimming coaches
Swarthmore College alumni
Deaths from Alzheimer's disease
Neurological disease deaths in California
29482098
https://en.wikipedia.org/wiki/Comparison%20of%20IPv6%20support%20in%20operating%20systems
Comparison of IPv6 support in operating systems
This is a comparison of operating systems in regard to their support of the IPv6 protocol.
Notes
Operating systems that support neither DHCPv6 nor SLAAC cannot automatically configure unicast IPv6 addresses.
Operating systems that support neither DHCPv6 nor ND RDNSS cannot automatically configure name servers in an IPv6-only environment.
A system's practical IPv6 readiness can also be probed programmatically, as sketched below.
References
External links
ISOC IPv6 FAQ with OS tips
IPv6
Computing comparisons
IPv6 support
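The notes above concern automatic address configuration; whether a given host can actually resolve and reach IPv6 destinations can be checked with a short script. The following Python sketch is a hypothetical, minimal probe and is no substitute for the per-OS feature comparison this article covers; the host name used is an arbitrary example.

```python
import socket

def ipv6_status(host="example.com", port=80):
    """Return a coarse description of this system's IPv6 readiness."""
    if not socket.has_ipv6:
        return "no IPv6 support compiled into this socket stack"
    try:
        # Ask the resolver for an IPv6 (AAAA) address of the host.
        infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                   socket.SOCK_STREAM)
    except socket.gaierror:
        return "IPv6 stack present, but no IPv6 address could be resolved"
    family, type_, proto, _, sockaddr = infos[0]
    try:
        with socket.socket(family, type_, proto) as s:
            s.settimeout(5)
            s.connect(sockaddr)  # succeeds only with working IPv6 routing
        return f"IPv6 connectivity OK via {sockaddr[0]}"
    except OSError as exc:
        return f"IPv6 address resolved, but connection failed: {exc}"

if __name__ == "__main__":
    print(ipv6_status())
```

Note that this checks name resolution and routing, not SLAAC or DHCPv6 specifically; the address configuration method in use is visible only through OS-specific interfaces.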
5956143
https://en.wikipedia.org/wiki/Portal%20%28video%20game%29
Portal (video game)
Portal is a 2007 puzzle-platform game developed and published by Valve. It was released in a bundle, The Orange Box, for Windows, Xbox 360 and PlayStation 3, and has since been ported to other systems, including Mac OS X, Linux, Android (via Nvidia Shield), and Nintendo Switch. Portal consists primarily of a series of puzzles that must be solved by teleporting the player's character and simple objects using "the Aperture Science Handheld Portal Device", a device that can create inter-spatial portals between two flat planes. The player-character, Chell, is challenged and taunted by an artificial intelligence named GLaDOS (Genetic Lifeform and Disk Operating System) to complete each puzzle in the Aperture Science Enrichment Center using the portal gun, with the promise of receiving cake when all the puzzles are completed. The game's unique physics allows kinetic energy to be retained through portals, requiring creative use of portals to maneuver through the test chambers. This gameplay element is based on a similar concept from the game Narbacular Drop; many of the team members from the DigiPen Institute of Technology who worked on Narbacular Drop were hired by Valve for the creation of Portal, making it a spiritual successor to that game. Portal was acclaimed as one of the most original games of 2007, despite criticisms of its short duration and limited story. It received praise for its originality, unique gameplay, and dark story laced with humorous dialogue. GLaDOS, voiced by Ellen McLain in the English-language version, received acclaim for her unique characterization, and the end credits song "Still Alive", written by Jonathan Coulton for the game, was praised for its original composition and humorous twist. Portal is often cited as one of the greatest video games ever made. Excluding Steam download sales, over four million copies of the game have been sold since its release, spawning official merchandise from Valve including plush Companion Cubes, as well as fan recreations of the cake and portal gun. A standalone version with extra puzzles, Portal: Still Alive, was published by Microsoft Game Studios on the Xbox Live Arcade service in October 2008 exclusively for Xbox 360. Portal 2, released in 2011, expanded on the storyline, adding several gameplay mechanics and a cooperative multiplayer mode.
Gameplay
In Portal, the player controls the protagonist, Chell, from a first-person perspective as she is challenged to navigate through a series of test chambers using the Aperture Science Handheld Portal Device, or portal gun, under the watchful supervision of the artificial intelligence GLaDOS. The portal gun can create two distinct portal ends, orange and blue. The portals create a visual and physical connection between two different locations in three-dimensional space. Neither end is specifically an entrance or exit; all objects that travel through one portal will exit through the other. An important aspect of the game's physics is momentum redirection and conservation. As moving objects pass through portals, they come through the exit portal in the direction the exit portal is facing, with the same speed with which they passed through the entrance portal. For example, a common maneuver is to place a portal some distance below the player on the floor, jump down through it, gaining speed in freefall, and emerge through the other portal on a wall, flying over a gap or another obstacle.
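This "speedy thing goes in, speedy thing comes out" rule can be sketched in a few lines of vector math. The Python example below is a hypothetical 2D simplification, not Valve's actual implementation (the real game works in 3D and also reorients the player upright after non-parallel transitions): speed is preserved while the direction is rotated from the entry portal's inward axis onto the exit portal's outward axis.

```python
import math

def rotate(v, angle):
    """Rotate a 2D vector v by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def portal_exit_velocity(v_in, entry_normal, exit_normal):
    """Redirect a velocity through a portal pair.

    entry_normal and exit_normal are unit vectors pointing out of each
    portal's surface. An object travelling along -entry_normal (straight
    into the entry portal) emerges along +exit_normal at the same speed.
    """
    # Angle that carries the entry portal's inward axis onto the
    # exit portal's outward axis.
    in_axis = (-entry_normal[0], -entry_normal[1])
    angle = (math.atan2(exit_normal[1], exit_normal[0])
             - math.atan2(in_axis[1], in_axis[0]))
    return rotate(v_in, angle)

# A 'fling': fall at 15 units/s into a floor portal (normal points up)
# and exit a wall portal facing +x; vertical speed becomes horizontal.
print(portal_exit_velocity((0.0, -15.0), (0.0, 1.0), (1.0, 0.0)))
# -> approximately (15.0, 0.0)
```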
This process of gaining speed and then redirecting it towards another area of a puzzle allows the player to launch objects or Chell over great distances, both vertically and horizontally, referred to as "flinging" by Valve. As GLaDOS puts it, "In layman's terms: speedy thing goes in, speedy thing comes out." If portal ends are not on parallel planes, the character passing through is reoriented to be upright with respect to gravity after leaving a portal end. Chell and all other objects in the game that can fit into the portal ends will pass through the portal. However, a portal shot cannot pass through an open portal; it will simply deactivate or create a new portal in an offset position. Creating a portal end instantly deactivates an existing portal end of the same color. Moving objects, glass, special wall surfaces, liquids, or areas that are too small cannot anchor portals. Chell is sometimes provided with cubes that she can pick up and use to climb on or to hold down large buttons that open doors or activate mechanisms. Particle fields, known as "Emancipation Grills" and occasionally called "Fizzlers" in the developer commentary, exist at the end of all test chambers and within some of them; when passed through, they deactivate any active portals and disintegrate any object carried through. These fields also block attempts to fire portals through them. Although Chell is equipped with mechanized heel springs to prevent damage from falling, she can be killed by various other hazards in the test chambers, such as turret guns, bouncing balls of energy, and toxic liquid. She can also be killed by objects hitting her at high speed, and by a series of crushers that appear in certain levels. Unlike most action games at the time, there is no health indicator; Chell dies if she is dealt a certain amount of damage in a short period, but returns to full health fairly quickly. Some obstacles, such as the energy balls and crushing pistons, deal fatal damage with a single blow. GameSpot noted, in its initial coverage of Portal, that many solutions exist for completing each puzzle, and that the gameplay "gets even crazier, and the diagrams shown in the trailer showed some incredibly crazy things that you can attempt". Two additional modes are unlocked upon the completion of the game that challenge the player to work out alternative methods of solving each test chamber. Challenge maps are unlocked near the halfway point and Advanced Chambers are unlocked when the game is completed. In Challenge mode, levels are revisited with the added goal of completing the test chamber in the least time, with the fewest portals, or with the fewest footsteps possible. In Advanced mode, certain levels are made more complex with the addition of more obstacles and hazards.
Synopsis
Characters
The game features two characters: the player-controlled silent protagonist named Chell, and GLaDOS (Genetic Lifeform and Disk Operating System), a computer artificial intelligence that monitors and directs the player. In the English-language version, GLaDOS is voiced by Ellen McLain, though her voice has been altered to sound more artificial. The only background information presented about Chell is given by GLaDOS; the credibility of these facts, such as Chell being adopted, an orphan, and having no friends, is questionable at best, as GLaDOS is a liar by her own admission.
In the "Lab Rat" comic created by Valve to bridge the gap between Portal and Portal 2, Chell's records reveal she was ultimately rejected as a test subject for having "too much tenacity", which is the main reason Doug Rattman, a former employee of Aperture Science, moved Chell to the top of the test queue.
Setting
Portal takes place in the Aperture Science Laboratories Computer-Aided Enrichment Center (Aperture Science for short), a research facility responsible for the creation of the portal gun. According to information presented in Portal 2, the complex is located in the Upper Peninsula of Michigan. Aperture Science exists in the same universe as the Half-Life series, although connections between the two franchises are limited to references. Information about the company, developed by Valve to create the setting of the game, is revealed during the game and via the real-world promotional website. According to the Aperture Science website, Cave Johnson founded the company in 1943 for the sole purpose of making shower curtains for the U.S. military. However, after becoming mentally unstable from "moon rock poisoning" in 1978, Johnson created a three-tier research and development plan to make his organization successful. The first two tiers, the Counter-Heimlich Maneuver (a maneuver designed to ensure choking) and the Take-A-Wish Foundation (a program to give the wishes of terminally ill children to adults in need of dreams), were commercial failures and led to an investigation of the company by the U.S. Senate. However, when the investigative committee heard of the success of the third tier, a person-sized, ad hoc quantum tunnel through physical space with a possible application as a shower curtain, it recessed permanently and gave Aperture Science an open-ended contract to continue its research. The development of GLaDOS, an artificially intelligent research assistant and disk-operating system, began in 1986 in response to Black Mesa's work on similar portal technology. A presentation seen during gameplay reveals that GLaDOS was also included in a proposed bid for de-icing fuel lines, incorporated as a fully functional disk-operating system that is arguably alive, unlike Black Mesa's proposal, which merely inhibits ice. Roughly thirteen years later, work on GLaDOS was completed and the untested AI was activated during the company's bring-your-daughter-to-work day in May 2000. Immediately after activation, the AI flooded the facility with deadly neurotoxin. The events of the first Half-Life game occur shortly afterwards, presumably leaving the facility forgotten by the outside world amid the apocalyptic happenings. Wolpaw, in describing the ending of Portal 2, affirmed that the Combine invasion, chronologically taking place after Half-Life and before Half-Life 2, had occurred before Portal 2's events. The areas of the Enrichment Center that Chell explores suggest that it is part of a massive research installation. At the time of the events depicted in Portal, the facility seems to be long-deserted, although most of its equipment remains operational without human control. During its development, Half-Life 2: Episode Two featured a chapter set on Aperture Science's icebreaker ship Borealis, but this was abandoned and removed before release.
Plot
Portal's plot is revealed to the player via audio messages or "announcements" from GLaDOS and visual elements inside rooms found in later levels.
According to The Final Hours of Portal 2, the year is established to be "somewhere in 2010", twelve years after Aperture Science was abandoned. The game begins with Chell waking up from a stasis bed and hearing instructions and warnings from GLaDOS, an artificial intelligence, about the upcoming tests. Chell then proceeds through a sequence of distinct chambers that introduce the game's mechanics. GLaDOS's announcements serve as instructions to Chell and help the player progress through the game, but they also develop the atmosphere and characterize the AI as a personality. Chell is promised cake as her reward if she completes all test chambers. Chell proceeds through the empty Enrichment Center, with GLaDOS as her only interaction. As the player nears completion, GLaDOS's motives and behavior turn more sinister; although she is designed to appear encouraging, GLaDOS's actions and speech suggest insincerity and callous disregard for the safety and well-being of the test subjects. The test chambers become increasingly dangerous as Chell proceeds, including a live-fire course designed for military androids as well as chambers flooded with a hazardous liquid. In another chamber, GLaDOS notes the importance of the Weighted Companion Cube, a waist-high crate with a single large pink heart on each face, for helping Chell to complete the test. However, GLaDOS declares it must be euthanized in an "emergency intelligence incinerator" before Chell can continue. Some later chambers include automated turrets with childlike voices (also provided by McLain) that fire at Chell, only to sympathize with her after being destroyed or disabled. After Chell completes the final test chamber, GLaDOS maneuvers Chell into an incinerator in an attempt to kill her. Chell escapes with the portal gun and makes her way through the maintenance areas within the Enrichment Center. GLaDOS panics and insists that she was pretending to kill Chell as part of testing. GLaDOS then asks Chell to assume the "party escort submission position", lying face-first on the ground, so that a "party associate" can take her to her reward, but Chell continues her escape. In this section, GLaDOS communicates with Chell as it becomes clear that the AI killed everyone else in the center. Chell makes her way through the maintenance areas and empty office spaces behind the chambers, sometimes following graffiti messages which point in the right direction. These backstage areas, which are in an extremely dilapidated state, stand in stark contrast to the pristine test chambers. The graffiti includes statements such as "the cake is a lie", and pastiches of Emily Dickinson's poem "The Chariot", Henry Wadsworth Longfellow's "The Reaper and the Flowers", and Emily Brontë's "No Coward Soul Is Mine", referring to and mourning the death of the Companion Cube. GLaDOS attempts to dissuade Chell with threats of physical harm and misleading statements as Chell makes her way deeper into the maintenance areas. Chell reaches a large chamber where GLaDOS's hardware hangs overhead. GLaDOS continues to threaten Chell, but during the exchange a sphere falls off GLaDOS, and Chell drops it into an incinerator. GLaDOS reveals that Chell has just destroyed her morality core, one of several "personality cores" that Aperture Science employees installed after the AI flooded the Enrichment Center with neurotoxin gas; with it removed, GLaDOS can once again access the facility's neurotoxin emitters.
A six-minute countdown starts as Chell dislodges and incinerates more of GLaDOS's personality cores, while GLaDOS discourages her both verbally, with taunts and juvenile insults, and physically, by firing rockets at her. After Chell destroys the last personality core, a malfunction tears the room apart and transports everything to the surface. Chell is then seen lying outside the facility's gates amid the remains of GLaDOS. Following the announcement of Portal 2, the ending was expanded in a later update. In this retroactive continuity, Chell is dragged away from the scene by an unseen entity speaking in a robotic voice, thanking her for assuming the "party escort submission position". The final scene, after a long and speedy zoom through the bowels of the facility, shows a Black Forest cake and the Weighted Companion Cube, surrounded by shelves containing dozens of apparently inactive personality cores. The cores begin to light up before a robotic arm descends and extinguishes the candle on the cake, causing the room to black out. As the credits roll, GLaDOS delivers a concluding report: the song "Still Alive", which declares the experiment to be a huge success, while also indicating to the player that GLaDOS is still alive.
Development
Concept
Portal is Valve's spiritual successor to the freeware game Narbacular Drop, the 2005 independent game released by students of the DigiPen Institute of Technology; the original Narbacular Drop team was subsequently hired by Valve. Valve became interested in Narbacular Drop after seeing it at DigiPen's annual career fair; Robin Walker, one of Valve's developers, saw the game at the fair and later contacted the team, providing advice and offering to show the game at Valve's offices. After their presentation, Valve's president Gabe Newell quickly offered the entire team jobs at Valve to develop the game further. Newell later commented that he was impressed with the DigiPen team because "they had actually carried the concept through", having already included the interaction between portals and physics and thus completing most of the work that Valve would otherwise have had to do on its own. To test the effectiveness of the portal mechanic, the team made a prototype in an in-house 2D game engine used at DigiPen. Certain elements were retained from Narbacular Drop, such as the system of identifying the two unique portal endpoints with the colors orange and blue. A key difference in the signature portal mechanic between the two games, however, is that Portal's portal gun cannot create a portal through an existing portal, unlike in Narbacular Drop. The game's original setting, of a princess trying to escape a dungeon, was dropped in favor of the Aperture Science approach. Portal took approximately two years and four months to complete after the DigiPen team was brought into Valve, and no more than ten people were involved in its development. Portal writer Erik Wolpaw, who, along with fellow writer Chet Faliszek, was hired by Valve for the game, claimed that "Without the constraints, Portal would not be as good a game". For the first year of development, the team focused mostly on the gameplay without any narrative structure. Playtesters found the game fun but asked what the test chambers were leading towards, which prompted the team to come up with a narrative for Portal. The Portal team worked with Half-Life series writer Marc Laidlaw on fitting the game into the series' plot.
This was done, in part, due to the limited art capabilities of the small team; instead of creating new assets for Portal, they decided to tie the game to an existing franchise, Half-Life, so they could reuse the Half-Life 2 art assets. Wolpaw and Faliszek were put to work on the dialogue for Portal. The concept of a computer AI guiding the player through experimental facilities to test the portal gun was arrived at early in the writing process. They drafted early lines for the then-unnamed "polite" AI around humorous situations, such as requesting the player's character to "assume the party escort submission position", and found this style of approach well suited to the game they wanted to create, ultimately leading to the creation of the GLaDOS character. GLaDOS was central to the plot, as Wolpaw notes: "We designed the game to have a very clear beginning, middle, and end, and we wanted GLaDOS to go through a personality shift at each of these points." Wolpaw further described how the idea of using cake as the reward came about: "at the beginning of the Portal development process, we sat down as a group to decide what philosopher or school of philosophy our game would be based on. That was followed by about 15 minutes of silence and then someone mentioned that a lot of people like cake." The cake element, along with additional messages given to the player in the behind-the-scenes areas, was written and drawn by Kim Swift.
Design
The austere settings in the game came about because testers spent too much time trying to complete the puzzles using decorative but non-functional elements. As a result, the setting was minimized to make the usable aspects of the puzzle easier to spot, using the clinical feel of the setting of the film The Island as a reference. While there were plans for a third area, an office space, to be included after the test chambers and the maintenance areas, the team ran out of time to include it. They also dropped the introduction of the Rat Man, a character who left the messages in the maintenance areas, to avoid creating too much narrative for the game, though the character was developed further in the tie-in comic "Lab Rat", which ties the stories of Portal and Portal 2 together. According to project lead Kim Swift, the final battle with GLaDOS went through many iterations, including having the player chased by James Bond-style lasers (an idea partially applied to the turrets); "Portal Kombat", in which the player would have needed to redirect rockets while avoiding turret fire; and a chase sequence following a fleeing GLaDOS. Eventually, they found that playtesters enjoyed a rather simple puzzle with a countdown timer near the end; Swift noted, "Time pressure makes people think something is a lot more complicated than it really is", and Wolpaw admitted, "It was really cheap to make [the neurotoxin gas]" in order to simplify the dialogue during the battle. Chell's face and body are modeled after Alésia Glidewell, an American freelance actress and voice-over artist, selected by Valve from a local modeling agency for her face and body structure. Ellen McLain provided the voice of the antagonist GLaDOS. Erik Wolpaw noted, "When we were still fishing around for the turret voice, Ellen did a sultry version. It didn't work for the turrets, but we liked it a lot, and so a slightly modified version of that became the model for GLaDOS's final incarnation."
The inspiration for the Weighted Companion Cube came from project lead Kim Swift, with additional input from Wolpaw, who had been reading some "declassified government interrogation thing" noting that "isolation leads subjects to begin to attach to inanimate objects". Swift commented, "We had a long level called Box Marathon; we wanted players to bring this box with them from the beginning to the end. But people would forget about the box, so we added dialogue, applied the heart to the cube, and continued to up the ante until people became attached to the box. Later on, we added the incineration idea. The artistic expression grew from the gameplay." Wolpaw further noted that the need to incinerate the Weighted Companion Cube came as a result of the final boss battle design; the team recognized they had not yet introduced the idea of incineration, which is necessary to complete the boss battle, and by training the player to do it with the Weighted Companion Cube, they found the narrative "way stronger" with its "death". Swift noted that any similarities to psychological situations in the Milgram experiment or 2001: A Space Odyssey are entirely coincidental. The portal gun's full name, Aperture Science Handheld Portal Device, can be abbreviated as ASHPD, which resembles a shortening of the name Adrian Shephard, the protagonist of Half-Life: Opposing Force. Fans noticed this similarity before the game's release; as a result, the team placed a red herring in the game by having the letters of Adrian Shephard's name highlighted on keyboards found within the game. According to Kim Swift, the cake is a Black Forest cake that she thought looked the best at the nearby Regent Bakery and Café in Redmond, Washington; as an Easter egg within the game, its recipe is scattered among various screens showing lines of binary code. The Regent Bakery has stated that since the release of the game, its Black Forest cake has been one of its more popular items.
Soundtrack
Most of the game's soundtrack is non-lyrical ambient music composed by Kelly Bailey and Mike Morasky, somewhat dark and mysterious to match the mood of the environments. The closing credits song, "Still Alive", was written by Jonathan Coulton and sung by Ellen McLain (a classically trained operatic soprano) in character as GLaDOS. A brief instrumental version of "Still Alive" is played in an uptempo Latin style over radios in-game. Wolpaw notes that Coulton was invited to Valve a year before the release of Portal, though it was not yet clear where Coulton would contribute: "Once Kim [Swift] and I met with him, it quickly became apparent that he had the perfect sensibility to write a song for GLaDOS." The use of the song over the closing credits was based on a similar concept from the game God Hand, one of Wolpaw's favorite titles. The song was released as a free downloadable track for the music video game Rock Band on April 1, 2008. The soundtrack for Portal was released as a part of The Orange Box Original Soundtrack. It was later included in a four-disc retail release, Portal 2: Songs To Test By (Collector's Edition), on October 30, 2012, featuring music from both games. The game's soundtrack became available via Steam Music on September 24, 2014.
Release
Portal was first released as part of The Orange Box for Windows and Xbox 360 on October 10, 2007, and for the PlayStation 3 on December 11, 2007.
The Windows version of the game is also available for download separately through Valve's content delivery system, Steam, and was released as a standalone retail product on April 9, 2008. In addition to Portal, The Orange Box also included Half-Life 2 and its two add-on episodes, as well as Team Fortress 2. Portal's inclusion in the bundle was considered an experiment by Valve; having no idea how successful Portal would be, the bundle provided it a "safety net". Portal was kept to a modest length in case the game did not go over well with players. In January 2008, Valve released a special demo version titled Portal: The First Slice, free for any Steam user using Nvidia graphics hardware as part of a collaboration between the two companies. It also comes packaged with Half-Life 2: Deathmatch, Peggle Extreme, and Half-Life 2: Lost Coast. The demo includes test chambers 00 to 10 (eleven in total). Valve has since made the demo available to all Steam users. Portal was the first Valve-developed game to be added to the OS X-compatible list of games available at the launch of the Steam client for Mac on May 12, 2010, supporting Steam Play, whereby players who had bought the game on either a Macintosh or a Windows computer could also play it on the other system. As part of the promotion, Portal was offered as a free title for any Steam user during the two weeks following the Mac client's launch. Within the first week of this offer, over 1.5 million copies of the game were downloaded through Steam. A similar promotion was held in September 2011, near the start of a traditional school year, encouraging the use of the game as an educational tool for science and mathematics. Valve wrote that they felt that Portal "makes physics, math, logic, spatial reasoning, probability, and problem-solving interesting, cool, and fun", a necessary feature to draw children into learning. This was tied to Digital Promise, a United States Department of Education initiative to help develop new digital tools for education, of which Valve is a part. Portal: Still Alive was announced as an exclusive Xbox Live Arcade game at the 2008 E3 convention, and was released on October 22, 2008. It features the original game, 14 new challenges, and new achievements. The additional content was based on levels from the map pack Portal: The Flash Version created by We Create Stuff, and contains no additional story-related levels. According to Valve spokesman Doug Lombardi, Microsoft had previously rejected Portal on the platform due to its large size. Portal: Still Alive was well received by reviewers. 1UP.com's Andrew Hayward stated that, with its easier access and lower cost compared to The Orange Box, Portal was now "stronger than ever". IGN editor Cam Shea ranked it fifth on his top 10 list of Xbox Live Arcade games. He stated that it was debatable whether an owner of The Orange Box should purchase it, as its added levels do not add to the plot; however, he praised the quality of the new maps included in the game. The game ranked 7th in a later list of top Xbox Live Arcade titles compiled by IGN's staff in September 2010. During the 2014 GPU Technology Conference on March 25, 2014, Nvidia announced a port of Portal to the Nvidia Shield, its Android handheld; the port was released on May 12, 2014. Alongside Portal 2, Portal is to be released on the Nintendo Switch in 2022 as part of Portal: Companion Collection, developed by Valve and Nvidia Lightspeed Studios.
Reception
Portal received critical acclaim, often earning more praise than either Half-Life 2: Episode Two or Team Fortress 2, the other titles included in The Orange Box. It was praised for its unique gameplay and dark, deadpan humor. Eurogamer wrote that "the way the game progresses from being a simple set of perfunctory tasks to a full-on part of the Half-Life story is absolute genius", while GameSpy noted, "What Portal lacks in length, it more than makes up for in exhilaration." The game was criticized for its sparse environments, and both criticized and praised for its short length. The standalone PC version of Portal holds an aggregate score of 90/100 on Metacritic, based on 28 reviews. In 2011, Valve stated that Portal had sold more than four million copies across the retail versions, including the standalone game and The Orange Box, and the Xbox Live Arcade version. The game generated a fan following for the Weighted Companion Cube, even though the cube itself does not talk or act in the game. Fans have created plush and papercraft versions of the cube and the various turrets, as well as PC case mods and models of the Portal cake and portal gun. Jeep Barnett, a programmer for Portal, noted that players have told Valve that they found it more emotional to incinerate the Weighted Companion Cube than to harm one of the "Little Sisters" from BioShock. Both GLaDOS and the Weighted Companion Cube were nominated for the Best New Character Award on G4, with GLaDOS winning the award for "having lines that will be quoted by gamers for years to come." Ben Croshaw of Zero Punctuation praised the game as "absolutely sublime from start to finish ... I went in expecting a slew of interesting portal-based puzzles and that's exactly what I got, but what I wasn't expecting was some of the funniest pitch black humor I've ever heard in a game". He felt the short length was ideal, as the game did not outstay its welcome. Writing for GameSetWatch in 2009, columnist Daniel Johnson pointed out similarities between Portal and Erving Goffman's essay on dramaturgy, The Presentation of Self in Everyday Life, which equates one's persona to the front and backstage areas of a theater. The game was also made part of the required course material, alongside other classical and contemporary works including Goffman's, for a freshman course at Wabash College in 2010 "devoted to engaging students with fundamental questions of humanity from multiple perspectives and fostering a sense of community". Portal has also been cited as a strong example of instructional scaffolding that can be adapted for more academic learning situations: through Valve's careful design of levels, the player is first hand-held in solving simple puzzles with many hints at the correct solution, but this support is slowly removed as the player progresses and is gone entirely by the second half of the game. Rock, Paper, Shotgun's Hamish Todd considered Portal an exemplary piece of game design, demonstrating how a series of chambers after the player obtains the portal gun gently introduces the concept of flinging without any explicit instructions. Portal was exhibited in the Smithsonian's art exhibition in America from February 14 through September 30, 2012, winning the "Action" category for the "Modern Windows" platform. Since its release, Portal has continued to be considered one of the best video games of all time, having been included on several cumulative "Top Games of All Time" lists through 2018.
Awards
Portal won several awards:
At the 2008 Game Developers Choice Awards, Portal won the Game of the Year award, along with the Innovation Award and Best Game Design award.
IGN honored Portal with several awards: Best Puzzle Game for PC and Xbox 360, Most Innovative Design for PC, and Best End Credit Song (for "Still Alive") for Xbox 360, along with overall honors for Best Puzzle Game and Most Innovative Design.
In its Best of 2007 awards, GameSpot honored The Orange Box with four awards in recognition of Portal: Best Puzzle Game, Best New Character(s) (for GLaDOS), Funniest Game, and Best Original Game Mechanic (for the portal gun).
Portal was awarded Game of the Year (PC), Best Narrative (PC), and Best Innovation (PC and console) honors by 1UP.com in its 2007 editorial awards.
GamePro honored the game for Most Memorable Villain (for GLaDOS) in its Editors' Choice 2007 Awards.
Portal was awarded Game of the Year 2007 by Joystiq, Good Game, and Shacknews.
X-Play gave it the Most Original Game award.
In Official Xbox Magazine's 2007 Game of the Year Awards, Portal won Best New Character (for GLaDOS), Best Original Song (for "Still Alive"), and Innovation of the Year.
In GameSpy's 2007 Game of the Year awards, Portal was recognized as Best Puzzle Game, Best Character (for GLaDOS), and Best Sidekick (for the Weighted Companion Cube).
The A.V. Club called it the Best Game of 2007.
The webcomic Penny Arcade awarded Portal Best Soundtrack, Best Writing, and Best New Game Mechanic in its satirical 2007 We're Right Awards.
Eurogamer gave Portal first place in its Top 50 Games of 2007 rankings.
IGN also placed GLaDOS (from Portal) on its list of the top 100 video game villains.
GamesRadar named it the best game of all time.
In November 2012, Time named it one of the 100 greatest video games of all time.
Wired considered Portal to be one of the most influential games of the first decade of the 21st century, believing it to be the prime example of quality over quantity for video games.
Legacy
The popularity of the game and its characters led Valve to develop merchandise for Portal, made available through its online Valve physical merchandise store. Some of the more popular items were the Weighted Companion Cube plush toys and fuzzy dice. When first released, both sold out in under 24 hours. Other products available through the Valve store include T-shirts, Aperture Science coffee mugs and parking stickers, and merchandise relating to the phrase "the cake is a lie", which has become an internet meme. Wolpaw noted they did not expect certain elements of the game to be as popular as they were, while other elements they had expected to become fads were ignored, such as "Hoopy", a giant hoop that rolls on-screen during the final scene of the game. A modding community has developed around Portal, with users creating their own test chambers and other in-game modifications. The group "We Create Stuff" created an Adobe Flash version of Portal, titled Portal: The Flash Version, just before the release of The Orange Box. This Flash version was well received by the community, and the group has since converted it to a map pack for the published game. Another mod, Portal: Prelude, is an unofficial prequel developed by an independent team of three that focuses on the pre-GLaDOS era of Aperture Science, and contains nineteen additional "crafty and challenging" test chambers. An ASCII version of Portal was created by Joe Larson.
An unofficial port of Portal to the iPhone using the Unity game engine was created, but it consisted of only a single room from the game. Mari0 is a fan-made four-player co-op mashup of the original Super Mario Bros. and Portal. Swift stated that future Portal developments would depend on the community's reactions, saying, "We're still playing it by ear at this point, figuring out if we want to do multiplayer next, or Portal 2, or release map packs." Some rumors regarding a sequel arose due to casting calls for voice actors. On March 10, 2010, Portal 2 was officially announced for a release late in that year; the announcement was preceded by an alternate reality game based on unexpected patches made to Portal that contained cryptic messages relating to Portal 2's announcement, including an update to the game creating a different ending for the fate of Chell. The original game left her in a deserted parking lot after destroying GLaDOS, but the update involved Chell being dragged back into the facility by a "Party Escort Bot". Though Portal 2 was originally announced for a Q4 2010 release, the game was released on April 19, 2011.
References
Footnotes
Bibliography
External links
Official website – The Orange Box
2007 video games
Android (operating system) games
Fiction with unreliable narrators
Interactive Achievement Award winners
Internet memes
Laboratories in fiction
Linux games
MacOS games
Mass murder in fiction
Physics in fiction
PlayStation 3 games
Portal (series)
Puzzle-platform games
Science fiction video games
Single-player video games
Source (game engine) games
Teleportation in fiction
Valve Corporation games
Video game memes
Video games about artificial intelligence
Video games developed in the United States
Video games featuring female protagonists
Video games scored by Kelly Bailey
Video games scored by Mike Morasky
Video games set in 2010
Video games set in Michigan
Video games using Havok
Video games with commentaries
Windows games
Xbox 360 games
Xbox 360 Live Arcade games
33734529
https://en.wikipedia.org/wiki/Visual%20arts
Visual arts
The visual arts are art forms such as painting, drawing, printmaking, sculpture, ceramics, photography, video, filmmaking, design, crafts and architecture. Many artistic disciplines, such as performing arts, conceptual art, and textile arts, also involve aspects of visual arts as well as arts of other types. Also included within the visual arts are the applied arts, such as industrial design, graphic design, fashion design, interior design and decorative art. Current usage of the term "visual arts" includes fine art as well as the applied or decorative arts and crafts, but this was not always the case. Before the Arts and Crafts Movement in Britain and elsewhere at the turn of the 20th century, the term 'artist' had for some centuries often been restricted to a person working in the fine arts (such as painting, sculpture, or printmaking) and not the decorative arts, crafts, or applied visual arts media. The distinction was emphasized by artists of the Arts and Crafts Movement, who valued vernacular art forms as much as high forms. Art schools made a distinction between the fine arts and the crafts, maintaining that a craftsperson could not be considered a practitioner of the arts. The increasing tendency to privilege painting, and to a lesser degree sculpture, above other arts has been a feature of Western art as well as East Asian art. In both regions, painting has been seen as relying to the highest degree on the imagination of the artist and as the furthest removed from manual labour; in Chinese painting, the most highly valued styles were those of "scholar-painting", at least in theory practiced by gentleman amateurs. The Western hierarchy of genres reflected similar attitudes.
Education and training
Training in the visual arts has generally been through variations of the apprentice and workshop systems. In Europe, the Renaissance movement to increase the prestige of the artist led to the academy system for training artists, and today most people pursuing a career in the arts train in art schools at tertiary level. Visual arts have now become an elective subject in most education systems.
Drawing
Drawing is a means of making an image, illustration or graphic using any of a wide variety of tools and techniques. It generally involves making marks on a surface by applying pressure from a tool, or moving a tool across a surface, using dry media such as graphite pencils, pen and ink, inked brushes, wax color pencils, crayons, charcoal, pastels, and markers. Digital tools that simulate the effects of these media, including digital pens and styluses such as the Apple Pencil, are also used. The main techniques used in drawing are: line drawing, hatching, crosshatching, random hatching, shading, scribbling, stippling, and blending. An artist who excels in drawing is referred to as a draftsman or draughtsman. Drawing and painting go back tens of thousands of years. Art of the Upper Paleolithic includes figurative art beginning between about 40,000 and 35,000 years ago. Non-figurative cave paintings, consisting of hand stencils and simple geometric shapes, are even older. Paleolithic cave representations of animals are found in areas such as Lascaux, France and Altamira, Spain in Europe; Maros, Sulawesi in Asia; and Gabarnmung, Australia. In ancient Egypt, ink drawings on papyrus, often depicting people, were used as models for painting or sculpture. Drawings on Greek vases, initially geometric, later developed towards the human form with black-figure pottery during the 7th century BC.
With paper becoming common in Europe by the 15th century, drawing was adopted by masters such as Sandro Botticelli, Raphael, Michelangelo, and Leonardo da Vinci, who sometimes treated drawing as an art in its own right rather than a preparatory stage for painting or sculpture.
Painting
Painting, taken literally, is the practice of applying pigment suspended in a carrier (or medium) and a binding agent (a glue) to a surface (support) such as paper, canvas or a wall. However, when used in an artistic sense it means the use of this activity in combination with drawing, composition, or other aesthetic considerations in order to manifest the expressive and conceptual intention of the practitioner. Painting is also used to express spiritual motifs and ideas; sites of this kind of painting range from artwork depicting mythological figures on pottery to The Sistine Chapel to the human body itself.
History
Origins and early history
Like drawing, painting has its documented origins in caves and on rock faces. The finest examples, believed by some to be 32,000 years old, are in the Chauvet and Lascaux caves in southern France. In shades of red, brown, yellow and black, the paintings on the walls and ceilings are of bison, cattle, horses and deer. Paintings of human figures can be found in the tombs of ancient Egypt. In the great temple of Ramses II, Nefertari, his queen, is depicted being led by Isis. The Greeks contributed to painting, but much of their work has been lost. Among the best remaining representations are the Hellenistic Fayum mummy portraits. Another example is the mosaic of the Battle of Issus at Pompeii, which was probably based on a Greek painting. Greek and Roman art contributed to Byzantine art in the 4th century AD, which initiated a tradition in icon painting.
The Renaissance
Apart from the illuminated manuscripts produced by monks during the Middle Ages, the next significant contribution to European art came from Italy's Renaissance painters. From Giotto in the 13th century to Leonardo da Vinci and Raphael at the beginning of the 16th century, this was the richest period in Italian art, as chiaroscuro techniques were used to create the illusion of 3-D space. Painters in northern Europe, too, were influenced by the Italian school. Jan van Eyck from Belgium, Pieter Bruegel the Elder from the Netherlands and Hans Holbein the Younger from Germany are among the most successful painters of the times. They used the glazing technique with oils to achieve depth and luminosity.
Dutch masters
The 17th century witnessed the emergence of the great Dutch masters, such as the versatile Rembrandt, who was especially remembered for his portraits and Bible scenes, and Vermeer, who specialized in interior scenes of Dutch life.
Baroque
The Baroque started after the Renaissance, from the late 16th century to the late 17th century. Main artists of the Baroque included Caravaggio, who made heavy use of tenebrism. Peter Paul Rubens, a Flemish painter who studied in Italy, worked for local churches in Antwerp and also painted a series for Marie de' Medici. Annibale Carracci took influences from the Sistine Chapel and created the genre of illusionistic ceiling painting. Much of the development that happened in the Baroque was because of the Protestant Reformation and the resulting Counter-Reformation. Much of what defines the Baroque is dramatic lighting and overall visuals.
Impressionism
Impressionism began in France in the 19th century with a loose association of artists, including Claude Monet, Pierre-Auguste Renoir and Paul Cézanne, who brought a new, freely brushed style to painting, often choosing to paint realistic scenes of modern life outdoors rather than in the studio. This was achieved through a new expression of aesthetic features demonstrated by brush strokes and the impression of reality. They achieved intense colour vibration by using pure, unmixed colours and short brush strokes. The movement treated art as dynamic, moving through time and adjusting to newfound techniques and perceptions of art. Attention to detail became less of a priority, as artists instead explored landscapes and nature as seen subjectively through the artist's eye.
Post-impressionism
Towards the end of the 19th century, several young painters took impressionism a stage further, using geometric forms and unnatural colour to depict emotions while striving for deeper symbolism. Of particular note are Paul Gauguin, who was strongly influenced by Asian, African and Japanese art; Vincent van Gogh, a Dutchman who moved to France, where he drew on the strong sunlight of the south; and Toulouse-Lautrec, remembered for his vivid paintings of night life in the Paris district of Montmartre.
Symbolism, expressionism and cubism
Edvard Munch, a Norwegian artist, developed his symbolistic approach at the end of the 19th century, inspired by the French impressionist Manet. The Scream (1893), his most famous work, is widely interpreted as representing the universal anxiety of modern man. Partly as a result of Munch's influence, the German expressionist movement originated in Germany at the beginning of the 20th century as artists such as Ernst Ludwig Kirchner and Erich Heckel began to distort reality for emotional effect. In parallel, the style known as cubism developed in France as artists focused on the volume and space of sharp structures within a composition. Pablo Picasso and Georges Braque were the leading proponents of the movement. Objects are broken up, analyzed, and re-assembled in an abstracted form. By the 1920s, the style had developed into surrealism with Dalí and Magritte.
Printmaking
Printmaking is creating, for artistic purposes, an image on a matrix that is then transferred to a two-dimensional (flat) surface by means of ink (or another form of pigmentation). Except in the case of a monotype, the same matrix can be used to produce many examples of the print. Historically, the major techniques (also called media) involved are woodcut, line engraving, etching, lithography, and screen printing (serigraphy, silk screening), but there are many others, including modern digital techniques. Normally, the print is printed on paper, but other mediums range from cloth and vellum to more modern materials.
European history
Prints in the Western tradition produced before about 1830 are known as old master prints. In Europe, from around 1400 AD, woodcut was used for master prints on paper, using printing techniques developed in the Byzantine and Islamic worlds. Michael Wolgemut improved German woodcut from about 1475, and Erhard Reuwich, a Dutchman, was the first to use cross-hatching. At the end of the century, Albrecht Dürer brought the Western woodcut to a stage that has never been surpassed, increasing the status of the single-leaf woodcut.
Chinese origin and practice In China, the art of printmaking developed some 1,100 years ago as illustrations alongside text cut in woodblocks for printing on paper. Initially images were mainly religious but in the Song Dynasty, artists began to cut landscapes. During the Ming (1368–1644) and Qing (1616–1911) dynasties, the technique was perfected for both religious and artistic engravings. Development in Japan 1603–1867 Woodblock printing in Japan (Japanese: 木版画, moku hanga) is a technique best known for its use in the ukiyo-e artistic genre; however, it was also used very widely for printing illustrated books in the same period. Woodblock printing had been used in China for centuries to print books, long before the advent of movable type, but was only widely adopted in Japan during the Edo period (1603–1867). Although similar to woodcut in western printmaking in some regards, moku hanga differs greatly in that water-based inks are used (as opposed to western woodcut, which uses oil-based inks), allowing for a wide range of vivid color, glazes and color transparency. Photography Photography is the process of making pictures by means of the action of light. The light patterns reflected or emitted from objects are recorded onto a sensitive medium or storage chip through a timed exposure. The process is done through mechanical shutters or electronically timed exposure of photons into chemical processing or digitizing devices known as cameras. The word comes from the Greek φως phos ("light"), and γραφις graphis ("stylus", "paintbrush") or γραφη graphê, together meaning "drawing with light" or "representation by means of lines" or "drawing." Traditionally, the product of photography has been called a photograph. The term photo is an abbreviation; many people also call them pictures. In digital photography, the term image has begun to replace photograph. (The term image is traditional in geometric optics.) Architecture Architecture is the process and the product of planning, designing, and constructing buildings or any other structures. Architectural works, in the material form of buildings, are often perceived as cultural symbols and as works of art. Historical civilizations are often identified with their surviving architectural achievements. The earliest surviving written work on the subject of architecture is De architectura, by the Roman architect Vitruvius in the early 1st century AD. According to Vitruvius, a good building should satisfy the three principles of firmitas, utilitas, venustas, commonly known by the original translation – firmness, commodity and delight. An equivalent in modern English would be: Durability – a building should stand up robustly and remain in good condition. Utility – it should be suitable for the purposes for which it is used. Beauty – it should be aesthetically pleasing. Building first evolved out of the dynamics between needs (shelter, security, worship, etc.) and means (available building materials and attendant skills). As human cultures developed and knowledge began to be formalized through oral traditions and practices, building became a craft, and "architecture" is the name given to the most highly formalized and respected versions of that craft. 
Filmmaking
Filmmaking is the process of making a motion picture, from an initial conception and research, through scriptwriting, shooting and recording, animation or other special effects, editing, sound and music work, and finally distribution to an audience. It refers broadly to the creation of all types of films, embracing documentary, strains of theatre and literature in film, and poetic or experimental practices, and is often used to refer to video-based processes as well.
Computer art
Visual artists are no longer limited to traditional visual arts media. Computers have been used as an ever more common tool in the visual arts since the 1960s. Uses include the capturing or creating of images and forms, the editing of those images and forms (including exploring multiple compositions) and the final rendering or printing (including 3D printing). Computer art is any art in which computers play a role in production or display. Such art can be an image, sound, animation, video, CD-ROM, DVD, video game, website, algorithm, performance or gallery installation. Many traditional disciplines are now integrating digital technologies and, as a result, the lines between traditional works of art and new media works created using computers have been blurred. For instance, an artist may combine traditional painting with algorithmic art and other digital techniques. As a result, defining computer art by its end product can be difficult. Nevertheless, this type of art is beginning to appear in art museum exhibits, though it has yet to prove its legitimacy as a form unto itself; in contemporary art the technology is still widely seen as a tool rather than a form in its own right, as painting is. On the other hand, there are computer-based artworks which belong to a new conceptual and postdigital strand, taking the same technologies, and their social impact, as an object of inquiry. Computer usage has blurred the distinctions between illustrators, photographers, photo editors, 3-D modelers, and handicraft artists. Sophisticated rendering and editing software has led to multi-skilled image developers. Photographers may become digital artists. Illustrators may become animators. Handicraft may be computer-aided or use computer-generated imagery as a template. Computer clip art usage has also made the clear distinction between visual arts and page layout less obvious, due to the easy access and editing of clip art in the process of paginating a document, especially to the unskilled observer.
Plastic arts
Plastic arts is a term for art forms that involve physical manipulation of a plastic medium by moulding or modeling, such as sculpture or ceramics. The term has also been applied to all the visual (non-literary, non-musical) arts. Materials that can be carved or shaped, such as stone or wood, concrete or steel, have also been included in the narrower definition, since, with appropriate tools, such materials are also capable of modulation. This use of the term "plastic" in the arts should not be confused with Piet Mondrian's use, nor with the movement he termed, in French and English, "Neoplasticism."
Sculpture
Sculpture is three-dimensional artwork created by shaping or combining hard or plastic material, sound, text and/or light, commonly stone (either rock or marble), clay, metal, glass, or wood. Some sculptures are created directly by finding or carving; others are assembled, built together and fired, welded, molded, or cast. Sculptures are often painted. A person who creates sculptures is called a sculptor.
Because sculpture involves the use of materials that can be moulded or modulated, it is considered one of the plastic arts. The majority of public art is sculpture. Many sculptures together in a garden setting may be referred to as a sculpture garden. Sculptors do not always make sculptures by hand. With increasing technology in the 20th century and the popularity of conceptual art over technical mastery, more sculptors turned to art fabricators to produce their artworks. With fabrication, the artist creates a design and pays a fabricator to produce it. This allows sculptors to create larger and more complex sculptures out of materials like cement, metal and plastic that they would not be able to create by hand. Sculptures can also be made with 3D printing technology.
US copyright definition of visual art
In the United States, the law protecting the copyright over a piece of visual art gives a more restrictive definition of "visual art". A "work of visual art" is —
(1) a painting, drawing, print or sculpture, existing in a single copy, in a limited edition of 200 copies or fewer that are signed and consecutively numbered by the author, or, in the case of a sculpture, in multiple cast, carved, or fabricated sculptures of 200 or fewer that are consecutively numbered by the author and bear the signature or other identifying mark of the author; or
(2) a still photographic image produced for exhibition purposes only, existing in a single copy that is signed by the author, or in a limited edition of 200 copies or fewer that are signed and consecutively numbered by the author.
A work of visual art does not include —
(A)(i) any poster, map, globe, chart, technical drawing, diagram, model, applied art, motion picture or other audiovisual work, book, magazine, newspaper, periodical, data base, electronic information service, electronic publication, or similar publication;
(ii) any merchandising item or advertising, promotional, descriptive, covering, or packaging material or container;
(iii) any portion or part of any item described in clause (i) or (ii);
(B) any work made for hire; or
(C) any work not subject to copyright protection under this title.
See also
Art materials
Asemic writing
Collage
Crowdsourcing creative work
Décollage
Environmental art
Found object
Graffiti
History of art
Illustration
Installation art
Interactive art
Landscape art
Mathematics and art
Mixed media
Portraiture
Process art
Recording medium
Sketch (drawing)
Sound art
Vexillography
Video art
Visual arts and Theosophy
Visual impairment in art
Visual poetry
References
Bibliography
Barnes, A. C., The Art in Painting, 3rd ed., 1937, Harcourt, Brace & World, Inc., NY.
Bukumirovic, D. (1998). Maga Magazinovic. Biblioteka Fatalne srpkinje knj. br. 4. Beograd: Narodna knj.
Fazenda, M. J. (1997). Between the pictorial and the expression of ideas: the plastic arts and literature in the dance of Paula Massano. n.p.
Gerón, C. (2000). Enciclopedia de las artes plásticas dominicanas: 1844–2000. 4th ed. Dominican Republic: s.n.
Oliver Grau (Ed.): MediaArtHistories. MIT Press, Cambridge 2007. With Rudolf Arnheim, Barbara Stafford, Sean Cubitt, W. J. T. Mitchell, Lev Manovich, Christiane Paul, Peter Weibel and others.
Laban, R. V. (1976). The language of movement: a guidebook to choreutics. Boston: Plays.
La Farge, O. (1930). Plastic prayers: dances of the Southwestern Indians. n.p.
Restany, P. (1974). Plastics in arts. Paris, New York: n.p.
University of Pennsylvania. (1969). Plastics and new art. Philadelphia: The Falcon Pr.
External links ArtLex – online dictionary of visual art terms. Calendar for Artists – calendar listing of visual art festivals. Art History Timeline by the Metropolitan Museum of Art. Arts by type Communication design Visual arts media
3922260
https://en.wikipedia.org/wiki/SquashFS
SquashFS
Squashfs is a compressed read-only file system for Linux. Squashfs compresses files, inodes and directories, and supports block sizes from 4 KiB up to 1 MiB for greater compression. Several compression algorithms are supported. Squashfs is also the name of free software, licensed under the GPL, for accessing Squashfs filesystems. Squashfs is intended for general read-only file-system use and for constrained block-device/memory systems (e.g. embedded systems) where low overhead is needed.
Uses
Squashfs is used by the Live CD versions of Arch Linux, Debian, Fedora, Gentoo Linux, HoleOS, Linux Mint, openSUSE, Salix, Ubuntu, NixOS, Clonezilla and on embedded distributions such as the OpenWrt and DD-WRT router firmware. It is also used in Chromecast and for the system partitions of some Android releases (Android Nougat). It is often combined with a union mount filesystem, such as UnionFS, OverlayFS, or aufs, to provide a read-write environment for live Linux distributions. This takes advantage of both Squashfs's high-speed compression abilities and the ability to alter the distribution while running it from a live CD. Distributions such as Debian Live, Mandriva One, Puppy Linux, Salix Live and Slax use this combination. The AppImage project, which aims to create portable Linux applications, uses Squashfs for creating AppImages. The Snappy package manager also uses Squashfs for its ".snap" file format. Squashfs is also used by the Linux Terminal Server Project and Splashtop. The tools unsquashfs and mksquashfs have been ported to Windows NT through Windows 8.1. 7-Zip also supports Squashfs.
History
Squashfs was initially maintained as an out-of-tree Linux patch. The initial version 1.0 was released on 23 October 2002. In 2009 Squashfs was merged into the Linux mainline as part of Linux 2.6.29. In that process, the backward-compatibility code for older formats was removed. Since then the Squashfs kernel-space code has been maintained in the Linux mainline tree, while the user-space tools remain on the project's GitHub page. The original version of Squashfs used gzip compression; Linux kernel 2.6.34 added support for LZMA and LZO compression, Linux kernel 2.6.38 added support for LZMA2 compression (which is used by xz), Linux kernel 3.19 added support for LZ4 compression, and Linux kernel 4.14 added support for Zstandard compression. Linux kernel 2.6.35 added support for extended file attributes.
See also
AppImage
Btrfs
Cloop
Comparison of file systems
Cramfs
e2compr
EROFS
Initramfs
List of file systems
References
External links
Compression file systems
Free special-purpose file systems
Read-only file systems supported by the Linux kernel
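As a concrete illustration of the workflow described in the article above, the following sketch drives the squashfs-tools utilities from Python to build and inspect an image. It is a minimal sketch, assuming a Linux host with squashfs-tools (with xz support) installed; the source and output paths are hypothetical, and the -comp and -b options select the compressor and block size discussed in the History section.

    import subprocess

    SRC = "/srv/rootfs"           # hypothetical directory tree to pack
    IMG = "/tmp/rootfs.squashfs"  # hypothetical output image

    # Build a Squashfs image with xz (LZMA2) compression and the maximum
    # 1 MiB block size; -noappend overwrites any existing image.
    subprocess.run(
        ["mksquashfs", SRC, IMG, "-comp", "xz", "-b", "1M", "-noappend"],
        check=True,
    )

    # List the image contents without mounting it.
    subprocess.run(["unsquashfs", "-l", IMG], check=True)

Mounting the result is read-only by definition; on most systems a command like "mount -t squashfs -o loop /tmp/rootfs.squashfs /mnt" (run as root) attaches it, which is how live CDs typically use such images.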
45286085
https://en.wikipedia.org/wiki/C3D%20Toolkit
C3D Toolkit
C3D Toolkit is a geometric modeling kit originally developed by ASCON Group, now by C3D Labs, written in C++ using Visual Studio. C3D Toolkit is responsible for constructing and editing geometric models. It can be licensed by other companies for use in their 3D computer graphics software products. The most widely known software in which C3D Toolkit is typically used are computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems. As a software development tool, C3D Toolkit performs 3D modeling, 3D constraint solving, polygonal mesh-to-B-rep conversion, 3D visualization, and 3D file conversions. It incorporates six modules:
C3D Modeler constructs geometric models, generates flat projections of models, performs triangulations, calculates the inertial characteristics of models, and determines whether collisions occur between the elements of models;
C3D Modeler for ODA enables advanced 3D modeling operations through the ODA's standard "OdDb3DSolid" API from the Open Design Alliance;
C3D Solver makes connections between the elements of geometric models, and considers the geometric constraints of models being edited;
C3D B-Shaper converts polygonal models to boundary representation (B-rep) bodies;
C3D Vision controls the quality of rendering for 3D models using mathematical apparatus and software, and the workstation hardware;
C3D Converter reads and writes geometric models in a variety of standard exchange formats.
History
Nikolai Golovanov is a graduate of the Mechanical Engineering department of Bauman Moscow State Technical University, where he trained as a designer of space launch vehicles. Upon his graduation, he began his career at the Kolomna Engineering Design Bureau, which at the time employed the future founders of ASCON, Alexander Golikov and Tatiana Yankina. While at the bureau, Dr Golovanov developed software for analyzing the strength and stability of shell structures. In 1989, Alexander Golikov and Tatiana Yankina left Kolomna to start up ASCON as a private company. Although they began with just an electronic drawing board, even then they were already conceiving the idea of three-dimensional parametric modeling. This radical concept eventually changed flat drawings into three-dimensional models. The ASCON founders shared their ideas with Nikolai Golovanov, and in 1996 he moved to take up his current position with ASCON. Today he continues to develop algorithms in C3D Toolkit.
Functionality
C3D Modeler
Modeling 3D solids
Performing Boolean operations
Creating thin-walled solids
Filleting and chamfering parts
Modeling sheet-metal parts
Designing with direct modeling
Modeling 3D surfaces
Modeling 3D wireframe objects
Surface triangulation
Performing geometric calculations
Casting planar projections
Creating section views
Calculating mass inertia properties
Collision detection
C3D Converter
Boundary representation (B-rep):
STEP incl. PMI (protocols AP203, AP214, AP242)
Parasolid X_T, X_B (read v.29.0/write v.27.0)
ACIS SAT (read v.22.0/write v.4.0, 7.0, 10.0)
IGES (read v.5.3/write v.5.3)
Polygonal representation:
STL (read and write)
VRML (read v.2.0/write v.2.0)
Both representations:
JT v.8.0–10.x incl. PMI and LOD (ISO 14306)
The C3D file format is also used as a CAD exchange format and is gaining popularity globally.
C3D Vision
Configures levels of detail (LOD)
Applies shaders and widgets
Uses 3D assembly feature tree managers
Controls static graphics and dynamic scenes
Sets anti-aliasing levels
Culls invisible elements of scenes
Speeds up visual computing through hardware acceleration
Section planes
Interactive 3D controls (manipulators)
C3D Solver
2D constraint solver for 2D drawings and 3D sketches
3D constraint solver for assemblies and kinematic analyses
The C3D Solver supports the following constraint types:
Coincidence (available in 2D and 3D)
Align points (2D)
Angle (2D and 3D)
Coaxiality (3D)
Distance (2D and 3D)
Equal lengths (2D)
Equal radii (2D)
Fix geometry (2D and 3D)
Fix length and direction (2D)
Incidence (2D)
Parallelism (2D and 3D)
Perpendicularity (2D and 3D)
Radius (2D)
Tangency (2D and 3D)
C3D B-Shaper
Controls surface recognition accuracy
Segments polygonal meshes
Edits segments
Reconstructs segments into certain types of surfaces
Generates B-rep models
Features
The development environment operates with the following tools:
MS Visual Studio 2017
MS Visual Studio 2015
MS Visual Studio 2013
MS Visual Studio 2012
MS Visual Studio 2010
MS Visual Studio 2008
Clang (for Mac OS)
GCC (for Linux)
NDK (for Android)
The supported programming languages include:
C++
C#
JavaScript
Applications
Since 2013, when the company started licensing the toolkit, several companies have adopted C3D software components for their products. Users include:
nanoCAD and nanoCAD Mechanica, which use the C3D Modeler, C3D Solver, and C3D Converter components
KOMPAS-3D flexible 3D modeling system
KOMPAS-Builder
KOMPAS:24 for Android
TECHTRAN, which uses C3D to import 3D models in various formats, view them, prepare blanks for turning CNCs from 3D models of future parts, and retrieve geometric data from 3D models
LEDAS Geometry Comparison (LGC) technology to compare 3D models and pinpoint all of the differences between them
CAE system PASS/EQUIP for comprehensive structural pressure vessel analysis
ESPRIT Extra CAD, which is based on the C3D kernel
Furniture design CAD K3-Furniture
Furniture design CAD K3-Mebel
Quick CADCAM
Furniture design CAD BAZIS System 3D
AEC CAD software platform Renga Architecture
Building information modeling system Renga Structure for the structural design of buildings and facilities
Staircon application for the timber staircase industry
SolidEng Dietech India, which develops software to configure mold bases for various die casting machines
LOGOS software for simulation with supercomputers
PRISMA (Russian analogue of MCNP)
EE Boost Acoustic VR
EE Boost Electromagnetics
MKA Steel application for single-story steel structure design
Delta Design software for the automated design of electronic devices
Altium Designer software package for printed circuit board, field-programmable gate array and embedded software design
QuickField finite element analysis software package
ÇİZEN die design software from Mubitek
Open BIM systems from CYPE Software
VR Concept virtual reality application, which uses C3D Converter for reading imported CAD data, and C3D Modeler for constructing and editing 3D models
Recently, C3D Modeler has been adapted to the ODA Platform. In April 2017, C3D Viewer was launched for end users. The application allows users to read 3D models in common formats and write them to the C3D file format. A free version is available.
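C3D's own API is proprietary and not shown in the article, so the sketch below is not C3D code; every name in it is hypothetical. It illustrates, in the simplest self-contained form, the kind of work the 2D constraint solver above performs: iteratively moving two points until a Distance constraint is satisfied. Real solvers handle many coupled constraints of the types listed above simultaneously.

    import math

    def solve_distance(p, q, target, iterations=50, tol=1e-9):
        """Move 2D points p and q symmetrically until |p - q| == target."""
        p, q = list(p), list(q)
        for _ in range(iterations):
            dx, dy = q[0] - p[0], q[1] - p[1]
            dist = math.hypot(dx, dy)
            if dist == 0.0:
                break  # degenerate case; a real solver would perturb one point
            error = dist - target
            if abs(error) < tol:
                break
            ux, uy = dx / dist, dy / dist
            # Split the correction equally between the two points.
            p[0] += ux * error / 2; p[1] += uy * error / 2
            q[0] -= ux * error / 2; q[1] -= uy * error / 2
        return p, q

    # Points 5 units apart, constrained to be 10 apart:
    print(solve_distance((0.0, 0.0), (3.0, 4.0), 10.0))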
See also CAD standards Computer-aided technologies Computer-aided design Computer-aided manufacturing Computer-aided engineering Geometric modeling kernel Geometric modeling Solid modeling Boundary representation References External links Official website Graphics software 3D graphics software Computer-aided design Computer-aided design software Computer-aided engineering software CAD file formats C++ libraries 3D scenegraph APIs Application programming interfaces Software development kits Programming tools
16914324
https://en.wikipedia.org/wiki/Lehman%27s%20laws%20of%20software%20evolution
Lehman's laws of software evolution
In software engineering, the laws of software evolution refer to a series of laws that Lehman and Belady formulated starting in 1974 with respect to software evolution. The laws describe a balance between forces driving new developments on one hand, and forces that slow down progress on the other hand. Over the past decades the laws have been revised and extended several times. Context Observing that most software is subject to change in the course of its existence, the authors set out to determine laws that these changes will typically obey, or must obey in order for the software to survive. In his 1980 article, Lehman qualified the application of such laws by distinguishing between three categories of software: An S-program is written according to an exact specification of what that program can do A P-program is written to implement certain procedures that completely determine what the program can do (the example mentioned is a program to play chess) An E-program is written to perform some real-world activity; how it should behave is strongly linked to the environment in which it runs, and such a program needs to adapt to varying requirements and circumstances in that environment The laws are said to apply only to the last category of systems. The laws All told, eight laws were formulated: (1974) "Continuing Change" — an E-type system must be continually adapted or it becomes progressively less satisfactory. (1974) "Increasing Complexity" — as an E-type system evolves, its complexity increases unless work is done to maintain or reduce it. (1974) "Self Regulation" — E-type system evolution processes are self-regulating with the distribution of product and process measures close to normal. (1978) "Conservation of Organisational Stability (invariant work rate)" — the average effective global activity rate in an evolving E-type system is invariant over the product's lifetime. (1978) "Conservation of Familiarity" — as an E-type system evolves, all associated with it, developers, sales personnel and users, for example, must maintain mastery of its content and behaviour to achieve satisfactory evolution. Excessive growth diminishes that mastery. Hence the average incremental growth remains invariant as the system evolves. (1991) "Continuing Growth" — the functional content of an E-type system must be continually increased to maintain user satisfaction over its lifetime. (1996) "Declining Quality" — the quality of an E-type system will appear to be declining unless it is rigorously maintained and adapted to operational environment changes. (1996) "Feedback System" (first stated 1974, formalised as law 1996) — E-type evolution processes constitute multi-level, multi-loop, multi-agent feedback systems and must be treated as such to achieve significant improvement over any reasonable base. References Software maintenance
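Because the laws above are empirical claims about measured release histories, they can be examined with very simple statistics. The sketch below uses made-up data (the release sizes are hypothetical) and computes the incremental growth per release, the kind of measurement used to probe the fifth law's claim that average incremental growth stays invariant as an E-type system evolves.

    from statistics import mean, stdev

    # Hypothetical functional size (e.g., module count) of seven releases.
    releases = [120, 145, 166, 190, 211, 236, 258]

    increments = [b - a for a, b in zip(releases, releases[1:])]
    print("increments:", increments)  # [25, 21, 24, 21, 25, 22]
    print(f"mean growth: {mean(increments):.1f}, stdev: {stdev(increments):.1f}")
    # A roughly constant mean with small spread is consistent with the
    # fifth law; a strong upward or downward trend would argue against it.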
66902316
https://en.wikipedia.org/wiki/Alnwick%20Packet%20%281802%20ship%29
Alnwick Packet (1802 ship)
Alnwick Packet (or simply Alnwick) was a smack launched in 1802 in Berwick. She sailed as a coaster and between the United Kingdom and the Continent, and as far as Madeira. In 1809 the British Royal Navy hired her to participate in the ill-fated Walcheren Expedition. Afterwards she returned to her previous trades. She was wrecked on 9 November 1825.
Career
Alnwick first appeared in the Register of Shipping (RS) in 1805. On 15 February 1807 Alnwick Packet, Schotton, master, was sailing from London to Alemouth (Alnmouth?) with a valuable cargo. She was in company with a brig from Sunderland. A French privateer approached, with a sloop in company believed to be her prize. Alnwick Packet was armed with six 12-pounder carronades, courtesy of a government program of arming merchantmen to enable them to protect themselves from French privateers. Captain Shotton had his crew man the carronades, opened his gun ports, and ran his colours up the mast. The French privateer, seeing that Alnwick Packet was prepared to fight, sailed away.
The Royal Navy hired Alnwick Packet on 1 July 1809. She was one of 15 small transports hired for the Walcheren Expedition. She is listed as a tender among the vessels stationed at Heligoland on 17 July under the command of Admiral Strachan, the naval commander of the expedition. The Navy returned her to her owners on 3 October.
On 15 April 1812 Alnwick Packet was towed into Bridlington, having lost her mast. She was coming from Alemouth. On 18 December 1817 Alnwick Packet, Adams, master, was driven ashore at St. Margaret's Bay, Kent, but it was expected that she would be gotten off. She was on a voyage from Southampton to Newcastle. A later report stated that she had sustained considerable damage.
Fate
Alnwick Packet, Moore, master, foundered on 9 November 1825 in the North Sea off Runton, Norfolk. Her crew and a small part of the cargo were saved, and landed at Sheringham. She was on a voyage from London to Alnwick when she ran into a brig in the night.
Notes
Citations
References
1802 ships
Age of Sail merchant ships of England
Packet (sea transport)
Hired armed vessels of the Royal Navy
Maritime incidents in 1817
Maritime incidents in November 1825
42136759
https://en.wikipedia.org/wiki/OpenEMIS
OpenEMIS
OpenEMIS, or Open Education Management Information System (EMIS), is an Education Management Information System for the education sector. Its main purpose is to collect, analyze, and report data related to the management of educational activities. Initially conceived by UNESCO, OpenEMIS serves as a response to Member States' need for tools for educational strategic planning.
History
OpenEMIS is an initiative launched by UNESCO. The initiative is in line with UNESCO's effort, since the early 2000s, to promote EMIS to address the issues of access, equity, quality, and relevance in education and the resulting gaps between Member States in providing an adequate decision support system, in keeping with the Education for All goals adopted in Dakar in 2000. OpenEMIS was conceived to be royalty-free software that can be customized to meet the specific needs of implementing countries, without conditions or restrictions for use, with the purpose of encouraging country-level capacity development and helping countries upgrade their local skills for managing the tool. The initiative has been coordinated by UNESCO in partnership with a nonprofit organization (NPO), Community Systems Foundation (CSF), for implementation and technical support since 2012.
Architecture
OpenEMIS is a suite of interrelated software solutions that supports data collection (Survey; Classroom; Staffroom), management (Core; School; Integrator), and analysis (Visualizer; Monitoring; Dashboard; Analyzer; DataManager; Profiles). Each application is designed to provide unique standalone features to support different information management requirements at different levels. These features can be customized and integrated with existing systems.
Implementation
OpenEMIS is being implemented in the following countries. In nearly all cases, the Ministry of Education is central to implementation, with support from and engagement with a wide range of stakeholders:
Malaysia (UNHCR Malaysia, 2012–present)
Grenada (OECS, 2013–present)
St. Vincent and the Grenadines (OECS, 2013–present)
Belize (Ministry of Education, Youth, Sports & Culture of Belize, 2014–present)
Jordan (UNESCO Jordan, 2014–present)
Maldives (Ministry of Education of Maldives, 2015–present)
Turks and Caicos Islands (Ministry of Education of the Turks and Caicos Islands, 2016–present)
Kyrgyz Republic (Ministry of Education Kyrgyz Republic, 2019–present)
Support
Support is available to implementing countries for all aspects of country implementation from the technical support team. Support activities include services for policy and planning, analysis, implementation, custom software development, training, and others as needed. System setup may involve costs for additional technical support for setup, configuration, migration, and capacity building.
See also
Data mining
PHP
Web 2.0
Sustainable Development Goals (SDG), Goal 4: Quality Education (SDG4)
References
External links
OpenEMIS
Educational administration
Educational software
School-administration software
Cloud applications
Virtual learning environments
Learning management systems
Free software
ERP software
1121290
https://en.wikipedia.org/wiki/AST%20Research
AST Research
AST Research, Inc., known as AST Computer, was a personal computer manufacturer, founded in Irvine, California, in 1980 by Albert Wong, Safi Qureshey and Thomas Yuen (the name comes from the initials of their first names: Albert Safi Thomas). In the 1980s, AST designed add-on expansion cards, and toward the 1990s evolved into a major personal computer manufacturer. AST was acquired by Samsung Electronics in 1997, but was forced to close in 1999 after a series of losses.
History
Expansion cards
AST's original business was the manufacture and marketing of a broad range of microcomputer expansion cards, later focusing on higher-density replacements for the standard I/O cards in the IBM PC. A typical AST multifunction card of the mid-1980s would have an RS-232 serial port, a parallel printer port, a battery-backed clock/calendar (the original IBM PC did not have one), a game port, and 384 KB of DRAM (added to the 256 KB on the motherboard to reach the full complement of 640 KB) - marketed under the product name 'SixPakPlus'. A similar expansion card was produced for the 8-bit Apple II, named the AST Multi I/O, which offered a serial and parallel interface, plus a battery-backed clock/calendar. In 1987 AST produced a pair of expansion cards for the Apple IIGS computer: the RamStakPlus, a dual RAM/ROM memory expansion card, and the AST Vision Plus, a real-time video digitizer card. The latter card was eventually sold to Silicon & Software and licensed and sold through Virtual Realities (and later LRO and then Alltech Electronics). AST Research also produced for the Macintosh line the Mac286, a pair of NuBus cards containing an Intel 80286 and RAM, allowing a Macintosh to run MS-DOS side by side with its existing operating system. These cards were announced in March 1987 alongside Apple's Macintosh II line. The product line was eventually sold to Orange Micro, which developed the concept further.
Personal computers
As PC manufacturers improved the integration of peripheral controllers on their motherboards, AST's original business began to shrink, and the company developed its own line of PCs for the desktop, mobile, and server markets. AST was one of the members of the Gang of Nine, which developed the EISA bus. In 1992 AST became a Fortune 500 company, ranked 431st. AST computers' reliability was considered close to that of quality leaders Compaq, Gateway, and IBM. AST gained a decent share of the PC market, but never came close to overtaking Compaq and Dell. During 1992–1995, AST held the largest market share in China, with Legend (now Lenovo) as the largest local reseller of AST computers. In 1993 Radio Shack sold its computer manufacturing division to AST, and in 1994 the two companies reached a deal to sell AST computers in Radio Shack stores. A year later, the electronics chain started selling IBM-brand computers instead.
Decline
AST's fortunes shrank due to its strategy of offering premium models in an increasingly competitive personal computer market, while Compaq Computer Corporation and other top-tier manufacturers slashed prices to go head-to-head with the cheapest clones. The failure of AST to recognize the movement towards the commoditization of the PC contributed to its downturn. AST insisted on developing and using its own components in the PCs it produced, instead of those of specialized OEMs. One often-used saying at AST, in an attempt to dismiss such competitors, was "the best technology they have is a screwdriver." In 1990, AST Research Inc.
announced its entrance into the Japanese computer market with a clone of a very popular Japanese computer. Japan's Kawasaki Steel Corp. and Tomcat Computer Corp. were involved in the development of AST Research Inc.'s new computer, the Dual SX-16. In 1994, AST Research Inc. announced its agreement to sell the assets of its Northern California software units to Telxon Corp. By the mid-1990s, AST had severe problems in the highly competitive PC market. Revenues for 1996 were $2.104 billion, down from 1995 revenues of $2.348 billion. AST Research was acquired by Samsung in 1997. At the time, Samsung owned 46 percent of AST and had offered to buy the remaining common shares. Prior to this move, Samsung had already provided considerable financial support to keep AST going. By December, the number of employees was down to 1,900. In 1999, Samsung was forced to close the California-based computer maker after a string of losses and a mass defection of research talent. Samsung had invested one billion US dollars in the company. The AST name was sold to Packard Bell. AST continued to provide support for Tandy and AST computers until December 31, 2001. AST sponsored the English football club Aston Villa from 1995 to 1998.
AST Computers, LLC
In January 1999, the name and intellectual property were acquired by a new company named AST Computers, LLC. AST Computers, LLC was a private company founded in 1999 when Beny Alagem, founder of Packard Bell Electronics, bought the name and intellectual property of AST Research, Inc. AST Computers disappeared from the market in 2001. As of early 2011, the dormant AST trademark appeared to be being relaunched by a new, independent company named DATA ACCESS, based in France.
References
External links
American companies established in 1980
American companies disestablished in 1999
Companies based in Irvine, California
Computer companies established in 1980
Computer companies disestablished in 1999
Defunct companies based in California
Defunct computer companies of the United States
Defunct computer hardware companies
2919731
https://en.wikipedia.org/wiki/FlatOut%202
FlatOut 2
FlatOut 2 is an action racing video game developed by Bugbear Entertainment and published by Empire Interactive in Europe and Vivendi Universal Games in North America. It is the sequel to the 2004 game FlatOut. This game is themed more on the street racing/import tuner scene than its predecessor. A notable change is the tire grip; players can take more control of their car, worrying less about skidding in tight turns. The game has three car classes: derby, race, and street. It was released in Russia on June 29, 2006, in Europe on June 30, and in North America on August 1. In 2008, an OS X version of the game was released by Virtual Programming. In 2014, a Linux version of the game was released on GOG.com as part of the launch of Linux support. An enhanced port was released in 2007 for the Xbox 360 and Windows as FlatOut: Ultimate Carnage. A PlayStation Portable port of Ultimate Carnage was released as FlatOut: Head On.
Gameplay
Ragdoll physics
The ragdoll physics in the sequel have been greatly updated. During a race, the driver may be thrown out of the car if it slams into a wall at high speed. In the numerous stunt minigames, the goal is to launch the driver out of the car and complete objectives such as knocking down a set of bowling pins, hitting the designated spots on a dartboard, scoring a field goal or flying through flaming hoops. Players must use 'aerobatics' to control the driver in flight, but overusing them will increase drag, which slows the driver down and can prevent him or her from reaching the designated target. If the driver falls short of the target, players can use the "nudge", which gives the driver a small upward boost and slightly reduces drag. In the stone skipping stunt minigame, players must use the nudge just as the driver hits the surface of the water to skip most efficiently and travel the furthest.
Reception
Critical reception
The PC version of FlatOut 2 received "generally favorable reviews", while the PlayStation 2 and Xbox versions received "average" reviews, according to the review aggregation website Metacritic. In Japan, Famitsu gave the PS2 version a score of all four sevens for a total of 28 out of 40.
Awards
Won IGN's award for Best PlayStation 2 Racing Game of 2006.
Won X-Play's award for Best Racing Game of 2006.
FlatOut: Ultimate Carnage
FlatOut: Ultimate Carnage is an enhanced port of FlatOut 2 featuring new gameplay modes and graphics as well as at least two new cars. It was known earlier as FlatOut: Total Carnage. FlatOut: Ultimate Carnage was released on July 22, 2007 in Europe, on August 1 in Australia, and on October 2 in North America for the Xbox 360. The Microsoft Windows version was released through the Steam network on August 26, 2008, and in stores on September 2. There is also a handheld version of the game for the PlayStation Portable called FlatOut: Head On, which was released in Australia on March 12, 2008, in Europe two days later, and in North America on April 4.
Gameplay
Ultimate Carnage introduces a brand new series of tracks, set anywhere from busy streets to storm water drains. The cars are more detailed than in previous games in the series, employing the latest dynamic lighting and shadow technology and a greatly enhanced damage and physics engine in which each car is made of up to 40 separate destructible parts. The single-player game supports up to 11 other AI-controlled cars in each race.
A new multiplayer format is also included; this runs on the Games for Windows - Live system, which requires the user to either sign in with their own Xbox LIVE or Games for Windows LIVE Gamertag, or sign up for one for free. The LAN function is not available in FlatOut: Ultimate Carnage, unlike the previous two FlatOut games for Windows.
Reception
The Xbox 360 and PC versions received "favorable" reviews, while the Head On version received "average" reviews, according to the review aggregation website Metacritic. Hyper's Maurice Branscombe commended the Xbox 360 version for "looking and playing better than ever before", but did not like the soundtracks and stated that "the game's load times are too long". In Japan, Famitsu gave the same console version a score of three sevens and one six for a total of 27 out of 40.
References
External links
2006 video games
Bugbear Entertainment games
Games for Windows certified games
Linux games
Lua (programming language)-scripted video games
MacOS games
Multiplayer and single-player video games
PlayStation 2 games
PlayStation Portable games
Vehicular combat games
Video game sequels
Video games developed in Finland
Video games featuring protagonists of selectable gender
Windows games
Xbox games
Xbox 360 games
27262148
https://en.wikipedia.org/wiki/Bengali%20input%20methods
Bengali input methods
Bengali input methods refer to different systems developed to type Bengali language characters using a typewriter or a computer keyboard.
Fixed computer layouts
With the advent of graphical user interfaces and word processing in the 1980s, a number of computer typing systems for Bengali were created. Most of these were originally based on Apple Macintosh systems.
Shahidlipi
Shahidlipi was the first Bengali keyboard developed for the computer, created by Saifuddahar Shahid in 1985. It was a phonetic-based layout on QWERTY for Macintosh computers. This keyboard was popular until the release of the Bijoy keyboard. It provided about 182 characters, including partial conjunct forms, across the Normal, Shift, AltGr, and Shift-AltGr layers.
Munier keyboard
The Munier keyboard layout comes from a Bengali typewriter layout named Munier-Optima. In 1965, Munier Choudhury redesigned the keyboard of the Bengali typewriter in collaboration with Remington typewriters of the then East Germany. Munier-Optima was the most used typewriter in Bangladesh, so many software developers implemented this layout in their keyboards. This layout was optimized for Unicode by Ekushey.
Unijoy keyboard
The UniJoy keyboard was developed by Ekushey. It has also been added to the m17n database.
Bangla Jatiyo
The Bangla Jatiyo Keyboard (National, ) layout, developed by the Bangladesh Computer Council, is currently the most popular layout and is regarded as the standard layout.
Bengali Inscript
This keyboard layout is designed so that all the Indic scripts can be typed with a uniform layout on a computer. This layout is officially accepted by Microsoft Corporation and is provided by default in their Windows operating system. It is also available on macOS, alongside Bengali-Qwerty. This layout is mainly popular in India.
Probhat
Probhat () is a free Unicode-based Bengali fixed layout. Probhat is included in almost all Linux OS(s). Its key mapping is similar to a phonetic pattern, but the typing method is fully fixed.
Bijoy
The Bijoy keyboard layout is a proprietary layout of Mustafa Jabbar. It is licensed under the Bangladesh Copyright Act 2005. The Bijoy keyboard, with related software and fonts, was first published in December 1988 for the Macintosh computer. The Windows version of the Bijoy keyboard was first published in March 1993. The first version of the Bijoy software was developed in India (possibly by an Indian programmer). Subsequent versions were developed in Bangladesh by Ananda Computers' team of developers, including Munirul Abedin Pappana, who worked on Bijoy 5.0, popularly known as Bijoy 2000. Version 3.0 is the latest version of the Bijoy layout. The Bijoy keyboard was the most widely used in Bangladesh until the release of the Unicode-based Avro Keyboard. Its software uses an AltGr character and vowel-sign input system that differs from the Unicode Standard. This ASCII/Unicode-based Bengali input software requires the purchase of a license for every computer it is used on.
Baishakhi
The Baishakhi keyboard was developed by the Society for Natural Language Technology Research (SNLTR). It is mainly used in Indian governmental work. This layout is available in most common Linux distributions.
Uni Gitanjali
The Gitanjali keyboard was customized for Unicode compliance as the Uni Gitanjali keyboard by the Society for Natural Language Technology Research (SNLTR). It is mainly used in Indian governmental work. This layout is available in most common Linux distributions.
Disha
The Disha keyboard is based on the Probhat layout and was created by Sayak Sarkar. This layout is available in the m17n database as proposed by the Ankur Group.
This keyboard aims to provide a visual typing method for Bengali.
Phonetic computer layouts
Akkhor
Akkhor (), pronounced ôkkhôr, is Bangla software developed by Khan Md. Anwarus Salam, first released free of charge on 1 January 2003. The Unicode/ANSI-based Akkhor keyboard is compatible with fixed keyboard layouts, including the Bijoy keyboard. Akkhor also provides a customization feature for designing fixed keyboard layouts. It provides a keyboard manager, which works system-wide, and also an independent Akkhor word processor.
Avro
Avro Keyboard (), developed by Mehdi Hasan Khan, was first released on 26 March 2003 for free. It facilitates both fixed and phonetic layouts. Avro phonetic allows a user to write Bengali by typing the phonetic formation of the words on English-language keyboards. Avro is available as a native IME on Microsoft Windows, macOS and Linux distributions. It was the built-in Bengali IM in Firefox OS.
Bakkhor
Bakkhor (a portmanteau of বাংলায় সাক্ষর, meaning Bengali literacy) was developed by Ensel Software and is available online. It is open-source and JavaScript-based. It allows some letters to be typed in multiple ways in order to allow typing with lower-case letters only on mobile devices.
Google Bengali Transliteration
There is a free transliteration web site and software package for Bengali scripts from Google.
Microsoft Bengali transliteration
Along with other Indic languages, Microsoft has web-based and desktop transliteration support for Bengali.
Bangla Onkur
Bangla Onkur (), pronounced onkur, developed by S. M. Raiyan Kabir, was first released on 30 March 2011 as open-source software. It facilitates only phonetic typing on the Macintosh platform. Bangla Onkur phonetic allows a user to write Bengali by typing the phonetic formation of the words on English-language keyboards. This was the first phonetic input method developed for Mac OS X.
Saon Bengali
This is an m17n library which provides the Saon () Bengali input method for touch typing in Bengali on Linux systems; the project was registered by its creator, Saoni, at SourceForge on 8 July 2012. This free and open-source IM is Unicode 6.1 compliant in terms of both normalization and the number of keystrokes used to input a single character. Saon Bengali enables touch typing, so if a user can type in English, they won't have to look at the keyboard to type in Saon Bengali. It is also phonetic and has something in common with all Bengali phonetic layouts, making the transition smooth for new users. As of July 2012 it was not yet part of m17n-contrib, which allows installation of all contributed m17n libraries through Linux's software channels, and it was too early to say whether it would be incorporated; this depends first on its author and then, if it is offered to m17n, on m17n itself. The m17n IM engine currently works with IBus, inter alia, on Linux. The copyright notice on Saon says, "You can redistribute this and/or modify it under the GNU LGPL 2.1 or later".
Open Bangla Keyboard
OpenBangla Keyboard is an open-source, Unicode-compliant Bangla input method for GNU/Linux systems. It is a full-fledged Bangla input method with many famous typing methods and typing automation tools. OpenBangla Keyboard comes with the popular Avro Phonetic, which is the de facto phonetic transliteration method for writing Bangla. It also includes multiple fixed keyboard layouts, such as Probhat, Munir Optima and National (Jatiya), which are very popular among professional writers.
Most features of Avro Keyboard are present in OpenBangla Keyboard, so Avro Keyboard users will feel right at home in Linux with OpenBangla Keyboard.
Borno
Borno (Bengali: বর্ণ) is a free Bengali input method editor developed by Jayed Ahsan. Borno is compatible with the latest version of Unicode and all versions of Windows. It was first released on 9 November 2018. Borno supports both fixed and phonetic keyboard layouts. It supports both ANSI and Unicode.
Mobile phone layouts
There is also software for typing Bengali on mobile phones and smartphones.
Ridmik Keyboard
Ridmik Keyboard () is an input system available for Android and iOS. Users can type in Bengali with Avro Phonetic (), Probhat () and National () as well as English layouts. It also comes with many emojis and background themes, and has handy shortcuts and speech-to-text support using the Google STT backend.
OpenBoard
OpenBoard is a free and open-source keyboard based on AOSP which includes Bengali layouts. It comes with three Bengali fixed layouts, including the Akkhor layout. OpenBoard is a privacy-focused keyboard which contains no shortcuts to Google apps and does not communicate with Google servers. It supports auto-correction and word suggestion for Bengali.
Indic Keyboard
Indic Keyboard is free and open-source keyboard software for Android developed by the Indic Project, available under the Apache License. It supports common Bengali layouts, namely Probhat, Avro and Inscript.
Bijoy
Bijoy Keyboard, or Bijoy Bangla (), is a mobile keyboard for Android and iOS. In 2015 it was re-released under the name Bijoy Bangla, for Android only. Bijoy Bangla is for writing Bangla in the Unicode system with the Bijoy keyboard. It uses the Bijoy layout, which is almost the same as the Jatiyo layout. Users can type in Bengali and English using this keyboard.
Parboti
With the Parboti keyboard (), users can type in Bengali and English. Users can also edit the fixed layout to their own preference.
Mayabi
Mayabi Bangla Keyboard () is an on-screen Bengali soft keyboard for the Android platform. A Bengali word dictionary is included with the keyboard for word prediction.
Google Indic Keyboard
Google Indic Keyboard is an Android keyboard that supports several Indic languages, including Bengali. It offers a handwriting input method and a Latin-letter transliteration layout, as well as a traditional Bengali keyboard.
Borno Keyboard
Borno () is the first open-source Bangla input method editor for Android, maintained and developed by Jayed Ahsan. It is licensed under GPL 3.0. It comes with both phonetic and fixed keyboard layouts, and an option for adding a PC QWERTY layout is also present. It also features horizontal emojis. Borno is the first Bangla keyboard from Bangladesh to support glide typing. It is still under development.
See also
Japanese input methods
References
External links
Bengali language
Input methods
Indic computing
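The phonetic methods described above (Avro Phonetic, the Google and Microsoft transliterators, Saon) all rest on the same core technique: greedily matching the longest Latin-letter sequence against a mapping table and emitting the corresponding Bengali characters. The toy transliterator below illustrates only that technique; its seven-entry table is hypothetical and ignores Bengali's inherent-vowel and conjunct rules, so it is a sketch of the approach, not any real layout's rule set.

    # Tiny illustrative Latin-to-Bengali mapping; real schemes are far
    # larger and context-sensitive.
    TABLE = {
        "bh": "\u09ad",  # ভ
        "b":  "\u09ac",  # ব
        "l":  "\u09b2",  # ল
        "m":  "\u09ae",  # ম
        "r":  "\u09b0",  # র
        "a":  "\u09be",  # া (vowel sign aa)
        "o":  "\u09cb",  # ো (vowel sign o)
    }
    MAXLEN = max(map(len, TABLE))

    def transliterate(text):
        out, i = [], 0
        while i < len(text):
            # Greedy longest-match, the core trick of phonetic input methods.
            for n in range(MAXLEN, 0, -1):
                chunk = text[i:i + n]
                if chunk in TABLE:
                    out.append(TABLE[chunk])
                    i += n
                    break
            else:
                out.append(text[i])  # pass through unmapped characters
                i += 1
        return "".join(out)

    print(transliterate("bhalo"))  # ভালো ("good")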
1449194
https://en.wikipedia.org/wiki/Common%20Address%20Redundancy%20Protocol
Common Address Redundancy Protocol
The Common Address Redundancy Protocol or CARP is a computer networking protocol which allows multiple hosts on the same local area network to share a set of IP addresses. Its primary purpose is to provide failover redundancy, especially when used with firewalls and routers. In some configurations, CARP can also provide load-balancing functionality. CARP provides functionality similar to VRRP and to Cisco Systems' HSRP. It is implemented in several BSD-based operating systems and has been ported to Linux (ucarp).
Example
If there is a single computer running a packet filter and it goes down, the networks on either side of the packet filter can no longer communicate with each other, or they communicate without any packet filtering. If, however, there are two computers running the packet filter and CARP, then if one fails, the other will take over, and computers on either side of the packet filter will not be aware of the failure, so operation will continue as normal. In order to make sure the new Active/Primary operates the same as the old one, the packet filter used must support synchronization of state between the two computers.
Principle of redundancy
A group of hosts using CARP is called a "redundancy group". The redundancy group allocates itself an IP address which is shared or divided among the members of the group. Within this group, one host is designated as "Active/Primary"; the other members are "Standby". The main host is the one that "takes" the IP address, and it answers any traffic or ARP requests directed to this address. Each host can belong to several redundancy groups. Each host must also have a second, unique IP address.
A common use of CARP is the creation of a group of redundant firewalls. The virtual IP address allotted to the redundancy group is given as the default router address on the computers behind this group of firewalls. If the main firewall breaks down or is disconnected from the network, the virtual IP address will be taken over by one of the standby firewalls and service availability will not be interrupted.
History
In the late 1990s the Internet Engineering Task Force (IETF) began work on a protocol for router redundancy. In 1997, Cisco informed the IETF that it had patents in this area and, in 1998, pointed out its patent on HSRP (Hot Standby Router Protocol). Nonetheless, the IETF continued work on VRRP (Virtual Router Redundancy Protocol). After some debate, the IETF VRRP working group decided to approve the standard, despite its reliance on patented techniques, as long as Cisco made the patent available to third parties under RAND (Reasonable and Non-Discriminatory) licensing terms. Because VRRP fixed problems with the HSRP protocol, Cisco began using VRRP instead, while still claiming it as its own. Cisco informed the OpenBSD developers that it would enforce its patent on HSRP. Cisco's position may have been due to its lawsuit with Alcatel. As Cisco's licensing terms prevented an open-source VRRP implementation, the OpenBSD developers began developing CARP instead. Because OpenBSD focuses on security, they designed CARP to use cryptography, which made CARP fundamentally different from VRRP and ensured that CARP did not infringe on Cisco's patent. CARP became available in October 2003. Later, it was integrated into FreeBSD (first released in May 2005 with FreeBSD 5.4), NetBSD and Linux (ucarp). While Cisco's US patent expired in 2014, the two incompatible protocols continue to coexist.
Incompatibility with IETF standards OpenBSD uses VRRP's protocol number and MAC addresses. The OpenBSD project requested unique numbers from IANA but was denied. To allocate numbers, IANA has several requirements. At the time, these were specified in RFC 2780. Requirements include participating in a collaborative, lengthy discussion process within the IETF and producing a detailed textual specification of the protocol. The OpenBSD developers met neither requirement. OpenBSD's website states the following: IANA had assigned protocol number 112 to VRRP (in 1998, via RFC 2338). Protocol number 112 remains in use by VRRP. CARP also uses a range of Ethernet MAC addresses which IEEE had assigned to IANA/IETF for the VRRP protocol. In spite of the overlap, it is still possible to use VRRP and CARP in the same broadcast domain, as long as the VRRP group ID and the CARP virtual host ID are different. See also Gateway Load Balancing Protocol (GLBP) HSRP pfsync VRRP IP network multipathing (IPMP) References External links UCARP: userland CARP implementation NetBSD port of CARP The OpenBSD song 3.5: "CARP License" and "Redundancy must be free" High-availability cluster computing OpenBSD FreeBSD First-hop redundancy protocols
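The MAC-address overlap described in the section above is mechanical: for IPv4, both VRRP and CARP derive the virtual MAC address from the IANA-assigned prefix 00:00:5e:00:01 followed by the one-byte group identifier (the VRID in VRRP, the vhid in CARP). A minimal sketch of that derivation:

    def virtual_mac(vhid: int) -> str:
        """Virtual MAC shared by VRRP (VRID) and CARP (vhid) for IPv4."""
        if not 1 <= vhid <= 255:
            raise ValueError("group ID must fit in one byte")
        return "00:00:5e:00:01:%02x" % vhid

    print(virtual_mac(1))    # 00:00:5e:00:01:01
    print(virtual_mac(200))  # 00:00:5e:00:01:c8

This is why a VRRP group and a CARP group with the same ID on one broadcast domain would collide, and why, as noted above, the two can coexist as long as the VRRP group ID and the CARP virtual host ID differ.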
33135469
https://en.wikipedia.org/wiki/IdeaPad%20Y%20series
IdeaPad Y series
The IdeaPad Y series was a consumer range of laptops produced by Lenovo, first announced in 2008. They were marketed as premium high-performance laptops for multimedia and gaming, as part of the IdeaPad line. The most significant differences from Lenovo's traditional ThinkPad business laptops were a more consumer-oriented appearance and performance-oriented components. IdeaPads feature a chiclet keyboard with rounded keys, similar to the latest ThinkPads. The first of the Y series were the IdeaPad Y710 and the IdeaPad Y510 notebooks, with screen sizes of 17 inches and 15 inches respectively. Not all features were entirely new, however. Notebook Review reported that the Y710 and Y510 notebooks had a keyboard that felt similar to the ThinkPad when used, despite the absence of the TrackPoint. The Y50 and Y40, released in 2014, featured a gaming-oriented design shift and slimming down. The latest release was the Y700 in late 2015. The IdeaPad Y series has since been replaced by the Legion Y series.
2016
Y900
Lenovo announced the IdeaPad Y900 in January 2016. It uses 6th-generation Intel Core i7 processors that can be overclocked (Lenovo has included utility software to make this easier for users). The chassis is black aluminum with color accents. The keyboard is mechanical. Customizable color LEDs help the touchpad and various parts of the keyboard stand out more clearly in dark environments. The display is 17.3 inches, uses an IPS panel with an anti-glare coating, and resolution is pixels. Up to 64GiB of RAM is supported. Bays are included for two SSDs or hard drives with RAID 0 support. An Nvidia GeForce GTX 980M comes standard with options for either 4GiB or 8GiB of VRAM.
Y700
The Lenovo IdeaPad Y700 series was a class of gaming laptops, offered in 14-inch, 15-inch and 17-inch sizes. Like the IdeaPad 300, 110 and 330 series of home and office laptops before them, the IdeaPad Y700 series of gaming laptops competed with Acer's Predator and Dell's Inspiron and G series gaming laptops. These models had three case versions, with 14", 15" and 17" screens; their successor was the Legion Y720, which used similar cases.
2015
Y70
The Y70 is a gaming laptop with a 17-inch multitouch screen. As of February 2015, the Y70 base model had a 2.5GHz Intel Core i5 processor, 8GiB of RAM, a 1TB SSD/HDD hybrid, and a 2GiB Nvidia GTX 860M GPU. The display has 1080p resolution and LED backlighting. The Y70 scored 4.5 hours of battery life on MobileMark's Office Productivity Test but is only able to achieve a battery life of about 2.5 hours for gaming. A review from Notebook Review said, "We're happy to recognize the Lenovo IdeaPad Y70 Touch with our Editor's Choice award for being a great large screen entertainment notebook."
2014
The flagship laptops released in 2014 were the Lenovo IdeaPad Y40, Y50 and Y70 Touch.
Y40 and Y50
The Y40 and Y50 are respectively 14-inch and 15-inch laptops designed specifically for gaming. 1080p displays come standard on both models, but the Y50 has an option for a 4K display with a resolution of . Both come with options for multi-touch displays. Both have Intel Core i7 processors. The Y40 uses an AMD Radeon R9 M270 video card with 2 GiB of VRAM; the Y50 uses an Nvidia GeForce GTX 860M video card with options for 2 GiB and 4 GiB of VRAM. Later models use the Nvidia GeForce GTX 960M video card. They come standard with 8 GiB of RAM (expandable up to 16 GiB). The Y40 comes standard with a 256 GB SSD and the Y50 comes standard with a 1 TB hybrid drive.
Both are notably thin, and neither has an optical drive. The Y40 and Y50 were announced at the 2014 International CES in Las Vegas and went on sale in the United States in May of the same year. In a review for PC World, Hayden Dingman wrote, "In terms of gaming performance, Lenovo's Y50 is one of the best laptops in its class. It's a great choice if you're looking for a portable gaming rig on a budget. Unfortunately, Lenovo compromised several key components—the keyboard, trackpad, and (most importantly) the display—in order to offer the Y50 at a mid-range price. Hook up a mouse, keyboard, and external display and you'll have a solid gaming machine. If you can't tolerate those compromises, you might have to bite the bullet and spend more money for a competitor's offering." In a review for LAPTOP, Sherri L. Smith wrote, "Lenovo continues to impress me with its ability to offer gaming laptops at affordable prices. For $949, shoppers get a sleek-looking 14-inch notebook with solid overall performance and long battery life. However, while the AMD Radeon R9 M275 GPU isn't a lightweight, you won't be fragging or questing at the maximum settings. I also wish the notebook featured a better display and keyboard." Y40 The Lenovo IdeaPad Y40 was announced in the US on January 5, 2014. Processor: Up to 5th Gen Intel Core i7 Memory: Up to 16GiB DDR3L Graphics: Up to AMD Radeon R9 M275 Storage: Up to 1TB HDD or 1TB + 8GB hybrid SSHD Battery: Up to 5 hours' battery life Display: 14.1" LED-backlit TN LCD Operating system: Microsoft Windows 8.1 Weight: Starting at Wireless: Bluetooth 4.0, IEEE 802.11a/b/g/n or 802.11ac Wi-Fi Ports: 2 × USB 3.0, 1 × USB 2.0, audio combo jack (headphone and mic), HDMI-out, 4-in-1 (SD / MMC / SDXC / SDHC) card reader, RJ45 gigabit Ethernet, S/PDIF Y50 The Lenovo IdeaPad Y50 was released in the second quarter of 2014. Processor: 4th Gen Intel Core i7-4710HQ (2.5GHz, 1600MT/s, 6MiB) Memory: up to 16GiB PC3-12800 DDR3L SDRAM 1600MT/s Storage: 256 or 512GB SSD, or 500GB/1TB 5400RPM + 8GB hybrid SSHD; Optical drive: external BD/DVD Graphics: NVIDIA GeForce GTX 860M (2GiB or 4GiB GDDR5) Display: 15.6" LED-backlit LCD (multitouch optional) Operating system: Microsoft Windows 8.1 Weight: Keyboard: backlit AccuType keyboard Media: 720p camera, JBL 2.1 speakers with Dolby Advanced Audio V2 Battery: up to 5 hours of Wi-Fi browsing depending on configuration Wireless: Bluetooth 4.0, IEEE 802.11a/b/g/n or 802.11ac Wi-Fi Ports: 2 × USB 3.0, 1 × USB 2.0, audio combo jack (headphone and mic), HDMI-out, 4-in-1 (SD / MMC / SDXC / SDHC) card reader, RJ45 gigabit Ethernet, S/PDIF In 2015, some components were updated with newer or higher-quality parts: IPS LCD panel instead of TN LCD panel Intel Core i7-4720HQ instead of i7-4710HQ NVIDIA GeForce GTX 960M instead of GTX 860M 2013 Y400 Y410p The IdeaPad Y410p was released around June 2013. This laptop features a fourth-generation Haswell Intel Core i7 processor. The Y410p is comparable to higher-end laptops such as the Alienware M14x, but the series starts at a comparatively lower price of $799. The Y410p also comes with an UltraBay, which can house a second dedicated graphics card, a hard drive, or an exhaust fan, and it uses UEFI with Secure Boot. 
Specifications: Processor: up to 4th-generation Intel Core i7-4700MQ (2.4GHz quad-core) Memory: up to 8GiB (DDR3 1600MT/s) Graphics: Intel HD 4600 + NVIDIA GeForce GT 755M 2GiB GDDR5 soldered graphics (and optional UltraBay GPU) Operating system: Microsoft Windows 8.1 Display: 14.1" LED-backlit TN LCD Y500 The IdeaPad Y500 was released in the first week of January 2013, after Lenovo announced it in late 2012. The Y500 is a modular laptop: its BD/DVD drive can be switched out for a second graphics card, another hard drive, or an additional exhaust fan. It also introduced an Always-On USB port, which can charge a mobile phone or any other compatible USB device even when the system is switched off and unplugged from the mains. Y500 specifications: Processor: 3rd-generation Intel Core i7-3630QM (2.4GHz quad-core) Memory: up to 16GiB (DDR3 1600MT/s) Graphics: Intel HD 4000 + NVIDIA GeForce GT 650M (2GiB GDDR5), soldered Operating system: Microsoft Windows 8 Display: 15.6" LED-backlit TN LCD Audio: premium JBL speakers with Dolby Home Theater v4 sound enhancement A new version of the Y500 with upgraded features was released in June 2013 along with the Lenovo Y410p. Compared to the older version, the upgraded model has the following changes: Display: TN panel Graphics: Intel HD 4600 + NVIDIA GeForce GT 750M (GT 650M in the older version) Memory: up to 8GiB DDR3 (16GiB in the older version) Y510p The IdeaPad Y510p was released around June 2013. This laptop features a fourth-generation Haswell Intel Core i7 processor. The Y510p also comes with an UltraBay, which can house a second dedicated graphics card, a hard drive, or an exhaust fan, and it uses UEFI with Secure Boot. Specifications: Processor: 4th-generation Intel Core i7-4700MQ (quad-core 2.4GHz) Memory: up to 16GiB (DDR3 1600MT/s) Graphics: NVIDIA GeForce GT 755M (2GiB GDDR5) GPU, with an optional secondary NVIDIA GeForce GT 750M (later updated to GT 755M) as an UltraBay graphics card Display: 15.6" LED-backlit TN LCD Audio: JBL-designed speakers supporting Dolby Home Theatre v4 Ports: 1 × USB 2.0 (always-on), 2 × USB 3.0, 6-in-1 card reader (SD, SDHC, SDXC, MMC, MS, MS-Pro), headphone, microphone, HDMI, VGA. Operating system: Microsoft Windows 8 (can be upgraded to 8.1) 2012 The IdeaPad Y-series laptops released by Lenovo in mid-2012 were the Y480 and Y580. Lenovo followed them up towards the end of the year with the Y400 and the Y500, which had nearly identical specifications. The main difference is that the Y400 and Y500 have an UltraBay slot that can take an additional hard drive, another fan, or a second GPU that works in SLI with the built-in one to substantially increase performance. 
Y480 The Y480 was released in 2012 with the following specifications: Processor: Intel 3rd-generation Core i5/i7 i7-3630QM (quad-core 2.4GHz, 6MiB L3 cache) i7-3610QM (quad-core 2.3GHz, 6MiB L3 cache) i5-3210M (dual-core 2.5GHz, 3MiB L3 cache) Memory: 8GB DDR3 1600MT/s Graphics: NVIDIA GeForce GT 640M LE (96 Fermi cores, 2GiB GDDR5 VRAM) NVIDIA GeForce GT 650M (384 Kepler cores, 2GiB GDDR5 VRAM) Display: 14.0" (16:9) LED-backlit TN LCD Storage: 1 × 2.5" SATA drive bay: 5400RPM HDD (500GB, 750GB, or 1TB) 1 × mSATA with 32GB SSD Dimensions: 345 × 239 × 20–32.8 mm (13.58 × 9.4 × 0.8–1.3 in) Weight: Operating system: Microsoft Windows 7 Home Premium Y580 The Y580 was released in late 2012 with the following specifications: Processor: Intel 3rd-generation Core i5/i7 i7-3630QM (quad-core 2.4GHz, 6MiB L3 cache) i7-3610QM (quad-core 2.3GHz, 6MiB L3 cache) i5-3210M (dual-core 2.5GHz, 3MiB L3 cache) Memory: 8 or 16GB DDR3 1600MT/s (2 slots) Graphics: NVIDIA GeForce GTX 660M (384 Kepler cores, 2GiB GDDR5 VRAM) Display: 15.6" (16:9) LED-backlit TN LCD Storage: 1 × 2.5" SATA drive bay: 5400RPM HDD (500GB, 750GB, or 1TB) or 7200RPM HDD (500GB) 1 × mSATA with 32GB SSD Dimensions: 385 × 255 × 35.7 mm (15.16 × 10 × 1.4 in) Weight: Operating system: Microsoft Windows 7 (Home Premium) or Windows 8 2011 The IdeaPad Y-series laptops released by Lenovo in 2011 were the Y470 and Y570. Y470 The Y470 was released in 2011 with the following specifications: Processor: Intel 2nd-generation Core i3/i5/i7 Quad-core: i7-2720QM (2.2GHz, 6MiB L3 cache) i7-2630QM (2.0GHz, 6MiB L3 cache) Dual-core: i7-2620M (2.7GHz, 4MiB L3 cache) i5-2540M (2.6GHz, 3MiB L3 cache) i5-2520M (2.5GHz, 3MiB L3 cache) i5-2410M (2.3GHz, 3MiB L3 cache) i3-2310M (2.1GHz, 3MiB L3 cache) RAM: up to 8GiB DDR3 (1066/1333MT/s) Display: 14" (16:9) TN LCD Graphics: Intel HD 3000 + NVIDIA GeForce GT 550M (1 or 2GiB VRAM) Storage: 1 × 2.5" SATA drive bay: 5400RPM HDD (250, 320, 500, 640, 750GB, or 1TB) 7200RPM HDD (320, 500, 750GB, or 1TB) 1 × mSATA socket: 32 or 64GB mSATA SSD Weight: Dimensions: 345 × 239 × 20–32.8 mm (13.58 × 9.4 × 0.8–1.3 in) Operating system: Microsoft Windows 7 (Professional or Home Premium) Y570 The Y570 was released in 2011 with the following specifications: Processor: Intel 2nd-generation Core i3/i5/i7 Quad-core: i7-2920XM (2.5GHz, 8MiB L3 cache) i7-2820QM (2.3GHz, 8MiB L3 cache) i7-2720QM (2.2GHz, 6MiB L3 cache) i7-2630QM (2.0GHz, 6MiB L3 cache) Dual-core: i7-2620M (2.7GHz, 4MiB L3 cache) i5-2540M (2.6GHz, 3MiB L3 cache) i5-2520M (2.5GHz, 3MiB L3 cache) i5-2430M (2.4GHz, 3MiB L3 cache) i5-2410M (2.3GHz, 3MiB L3 cache) i3-2310M (2.1GHz, 3MiB L3 cache) Memory: up to 8GB DDR3 (1066/1333MT/s) Graphics: Intel HD 3000 + NVIDIA GeForce GT 555M (1GiB or 2GiB VRAM) Display: 15.6" (16:9) TN LCD Storage: 1 × 2.5" SATA drive bay 5400RPM HDD (250, 320, 500, 640, 750GB, or 1TB) 7200RPM HDD (320, 500, 750GB, or 1TB) 1 × mSATA socket 32 or 64GB mSATA SSD Weight: Dimensions: 385 × 255 × 22–35.7 mm (15.2 × 10 × 0.87–1.4 in) Operating system: Microsoft Windows 7 (Professional, Home Premium, or Home Basic) 2010 The IdeaPad Y-series laptops released in 2010 by Lenovo were the Y460, Y460p, Y730, Y560p, and Y560d. Y460 and Y460p Notebook Review noted that the Y460 offered "great gaming performance", although the system heated up considerably while gaming. The battery life and design were also praised, with the reviewer stating that there was a "huge improvement in the looks department". 
LAPTOP Magazine offered a similar opinion, stating, "Lenovo delivers multimedia and gaming power in a portable design, complete with a one-of-a-kind navigation control". Y560d, Y560 and Y560p Y730 The Y730 laptop was released as an update to the Y710, with the most significant differences being a chipset update to Intel PM45 and the ability to use DDR3 memory. The laptop offered: Processor: Intel Core 2 Extreme X9100 (3.06GHz) RAM: 3GB DDR3 Graphics: ATI Radeon HD 3650 XT Display: 17" TN LCD Storage: SATA, up to 500GB HDD Reviewers disagreed on its capacity for gaming. About.com indicated that it was not very fast for high-resolution PC gaming, suggesting that it was better suited for casual gamers and viewing HD videos. The screen was also noted as having a lower resolution than the industry standard. On the other hand, the reviewer at GADGETBASE was extremely enthusiastic about the laptop, calling it "the ultimate notebook" with "stellar performance" for "a die-hard gamer". 2009 The Y-series laptops launched in 2009 by Lenovo were the Y450 and Y550. Y450 The successor to the Y430, the Y450 laptop offered the following specifications: Processor: Intel Core 2 Duo T6400 Display: 14" LED-backlit TN LCD Memory: up to 3GiB DDR3 1066MT/s Graphics: Intel GMA 4500MHD (integrated) Storage: up to 250GB 5400RPM SATA Wireless: Intel Wireless Wi-Fi Link 5100; Bluetooth 2.0 + EDR Weight: Dimensions: Operating system: Microsoft Windows 7 Home Premium 32-bit or 64-bit PC World gave the laptop a rating of 2.5 of 5 stars, praising the keyboard, design, and overall value; the main negative point was an uneven vertical viewing angle. Y550 Released in 2009, the IdeaPad Y550 laptop offered the following specifications: Processor: 2.0GHz Intel Core 2 Duo T6400 Memory: up to 8GB DDR3 Storage: up to 320GB SATA Display: LED-backlit TN LCD Graphics: Intel GMA 4500MHD (integrated) Weight: Dimensions: Notebook Review called the IdeaPad Y550 laptop well-built, with a wide array of options. The design was also appreciated, and as with previous IdeaPad Y series laptops, both the keyboard and touchpad were positively received. 2008 The Y series laptops launched in 2008 by Lenovo were the Y710, Y510, Y530, and Y430. Y430 The IdeaPad Y430 featured a 14.1-inch screen, an Intel Core 2 Duo T5800 processor, and Intel GMA 4500MHD graphics. PC World was enthusiastic in its review of the Y430 notebook, calling it "among the best midsized laptops available" and "a joy to use". Summing up the notebook's capabilities, PC World said, "This is a solidly built unit that's a joy to use and has plenty of grunt for most applications. It also has versatile networking options, including the ability to connect to 5GHz IEEE 802.11n Wi-Fi routers." Y510 The Y510 notebook offered the following specifications: Processor: Intel Core 2 Duo T2330, T5450, or T5550 Memory: up to 4GiB Graphics: Intel GMA X3100 (integrated, up to 256MiB shared video RAM) Display: 15.4" TN LCD Storage: up to 250GB SATA HDD Optical drive: DVD burner Battery: 6-cell battery, up to 4 hours of life Weight: Dimensions: Operating system: Microsoft Windows Vista Home Premium (32-bit) Y530 The Y530 notebook was the successor to the Y510, with the same chassis but upgraded to the Intel Centrino 2 platform. While the notebook was slightly thicker than other similar laptops, it was still portable and easy to carry around. 
Notebook Review stated that the positive points of the Y530 notebook were the build quality, the speaker system, and the comfortable keyboard and touchpad. The negative points were the NVIDIA 9300M graphics card and the highly reflective display. Y710 The first of the Y-series laptops were the Y7xx models, including the Y710 and Y730. The Y710 had an optional "Lenovo Game Zone" module (factory-mounted in the keypad module space) and offered the following specifications: Processor: Intel Core 2 Duo T5450, T8100, or T9300 Memory: up to 8GiB DDR2 Graphics: ATI Mobility Radeon HD 2600 (256MiB VRAM) Display: 17" (16:10) TN LCD Storage: up to 100GB SATA HDD Optical drive: DVD burner or Blu-ray Battery: 6-cell battery, up to 4 hours Weight: Dimensions: Operating system: Microsoft Windows Vista Home Premium (32-bit) External links IdeaPad Y series on Lenovo.com Ideapad Y50 on Lenovo.com Ideapad Y40 on Lenovo.com Ideapad Y500 on Lenovo.com
25212262
https://en.wikipedia.org/wiki/1930%20Toronto%20municipal%20election
1930 Toronto municipal election
Municipal elections were held in Toronto, Ontario, Canada, on January 1, 1930. In a close mayoral election, Bert Wemp ousted two-term incumbent Sam McBride. The main issue of the election was a proposed downtown beautification scheme that would have rebuilt roads in the core. The proposal was rejected in a referendum after voters in the suburbs voted against it. McBride was the plan's leading proponent, and its rejection hurt his reelection bid. Toronto mayor McBride had been elected in 1928 and had been in office for two years. He was defeated by controller and Toronto Telegram editor Bert Wemp by 4,378 votes. Also running was controller A.E. Hacker, but he finished a distant third. Results Bert Wemp - 54,309 Sam McBride - 49,933 Albert E. Hacker - 3,210 Board of Control Only one member of the Board of Control elected in the previous election was running for reelection: W.A. Summerville. Hacker and Wemp had both chosen to run for mayor. Joseph Gibbons had been appointed to the board of Toronto Hydro and was replaced mid-term by Alderman Frank Whetter, but Whetter was defeated when he ran for a full term. Elected were two candidates considered representatives of labour, James Simpson and William D. Robbins; the other new controller was Claude Pearce, who had strong support from Roman Catholic voters. Results W.A. Summerville (incumbent) - 47,418 Claude Pearce - 46,692 James Simpson - 44,921 William D. Robbins - 39,023 Benjamin Miller - 37,156 Frank Whetter (incumbent) - 31,772 Brook Sykes - 28,043 Wesley Benson - 25,054 Harry Bradley - 2,617 City council Ward 1 (Riverdale) Robert Siberry (incumbent) - 8,567 Robert Allen (incumbent) - 7,187 Lorne Trull - 6,382 Frank M. Johnston (incumbent) - 5,047 William Taylor - 3,184 Harry Perkins - 1,837 Ward 2 (Cabbagetown and Rosedale) John R. Beamish (incumbent) - 6,754 John Winnett (incumbent) - 5,972 James Cameron (incumbent) - 5,017 Joseph Miller - 5,386 Robert Yeomans - 4,191 Hugh Sutherland - 3,486 Frank Gallagher - 1,010 Ward 3 (Central Business District) J. George Ramsden - 6,256 Harry W. Hunt (incumbent) - 3,562 Andrew Carrick (incumbent) - 4,286 H.L. Rogers - 4,211 George Yorke - 4,102 Wallace Kennedy - 2,576 Ward 4 (Kensington Market and Garment District) Samuel Factor (incumbent) - 4,022 Nathan Phillips (incumbent) - 3,995 Charles Ward - 3,386 John McMulkin - 3,086 George King - 1,941 Jacob Romer - 1,124 Ward 5 (Trinity-Bellwoods) William James Stewart (incumbent) - 6,060 Fred Hamilton (incumbent) - 5,035 Robert Leslie - 5,013 Louis Fine - 3,825 James Phinnemore - 3,111 Garnet Archibald - 2,765 Mary McNab - 1,161 Max Shur - 317 Ward 6 (Davenport and Parkdale) Joseph Wright (incumbent) - 10,576 D.C. MacGregor - 8,330 John Boland (incumbent) - 7,325 John Laxton (incumbent) - 7,125 S.I. Wright - 5,404 Joseph King - 978 James Gill - 912 Albert Smith - 550 Ward 7 (West Toronto Junction) William J. Wadsworth (incumbent) - 5,701 Alexander Chisholm (incumbent) - 4,576 Samuel Ryding (incumbent) - 4,381 John Whetton - 2,861 George Watson - 830 Ward 8 (East Toronto) Walter Howell (incumbent) - 7,921 Ernest Bray - 7,569 Albert Burnese (incumbent) - 7,123 Robert Baker (incumbent) - 7,037 William Robertson - 2,978 Results are taken from the January 2, 1930 Toronto Star and might not exactly match final tallies. References Election Coverage. Toronto Star. January 2, 1930
39677141
https://en.wikipedia.org/wiki/Mocana
Mocana
Mocana (founded 2002) is a San Jose-based company that focuses on embedded system security for industrial control systems and the Internet of Things (IoT). One of its main products, the IoT Security Platform, is an OS-independent, high-assurance security platform intended to support all device classes. This decoupling of the security implementation from the rest of application development allows for easier development of software for devices comprising the "Internet of Things", in which numerous independent networked devices communicate with each other in various ways. Mocana was originally launched as an embedded systems security company, but in the early 2010s the company shifted its focus to protecting mobile devices and the apps and data on them. History Mocana introduced its products in 2004 with a focus on embedded systems security. That same year the company launched Embedded Security Suite, a software product to secure communications between networked devices. In February 2005, while based in Menlo Park, California, the company joined the Freescale Semiconductor Developers Alliance Program and delivered that group's first security software. In 2008, Mocana was cited as an example of how an independent company could provide security for smartphones. Mocana CEO Adrian Turner published an article in the San Jose Mercury News on the risks associated with non-PC networked devices, and The New York Times reported that Mocana's researchers had "discovered they could hack into a best-selling Internet-ready HDTV model with unsettling ease," highlighting the opportunity for criminals to intercept information like credit card billing details. Media outlets across the U.S. cited this point in their coverage of the risks associated with advances in technology. Mocana sponsored the 7th Workshop on RFID Security and Privacy at the University of Massachusetts in 2011. It launched the Mobile Application Protection platform in 2011 with support for Android apps, and added iOS app support in 2012. Following a Series D funding round in 2012, total investment in Mocana was $47 million. New CEO James Isaacs replaced Turner in September 2013. Interim CEO Peter Graham replaced Isaacs in April 2016. In April 2016, Mocana spun off its mobile security business to Blue Cedar Networks. William Diotte replaced Graham as CEO in May 2016. Mocana was originally based in San Francisco but moved to Sunnyvale in December 2017 and later to San Jose. The company was acquired by DigiCert in January 2022. Products and services Mocana's IoT Security Platform is a security software suite for embedded systems. The software provides the cryptographic controls (e.g. authentication, confidentiality, encryption, and device and data integrity verification) for embedded devices and applications. The company also offers customizable user agreements and optional FIPS 140-2 validated cryptographic engines. Access to application source code is not required. The product's design is based on the assumption that many assurances of security from the device and its operating system may be compromised; this removes the need for "infallible" system-wide security policies. In addition, Mocana offers consulting services, evaluating and advising on security threats in networked devices. 
Industries served Mocana's security technology is used in airplane in-flight entertainment systems, medical devices, battlefield communications, automobile firmware, and cell phone carrier networks. Mocana senior analyst Robert Vamosi was cited in a 2011 piece in Bloomberg Businessweek comparing tech companies' approaches to security. Funding Mocana's investors include Trident Capital (2012), Intel Capital (2011), Shasta Ventures, Southern Cross Venture Partners, and Symantec (2010). As of the August 2012 Series D, a total of $47 million had been raised. Awards, recognition, and accomplishments Recognized by Frost & Sullivan as the leading IoT security platform for industrial manufacturing and automation in 2017 Named most innovative security company by Leading Lights in 2017 Named to the OnDemand 100 in 2013 Recognized by the World Economic Forum as a 2012 Technology Pioneer Named to the "Red Herring Global 100" in 2008 Authored by Mocana personnel Mocana senior analyst Robert Vamosi published the book "When Gadgets Betray Us: The Dark Side of Convenience" in 2011. Mocana CEO Adrian Turner published the book Blue Sky Mining in 2012. Mocana engineer Dnyanesh Khatavkar presented the paper Quantizing the throughput reduction of IPSec with mobile IP at the 2002 (45th) Midwest Symposium on Circuits and Systems, an IEEE conference.
4028737
https://en.wikipedia.org/wiki/Computer%20Lib/Dream%20Machines
Computer Lib/Dream Machines
Computer Lib/Dream Machines is a 1974 book by Ted Nelson, printed as a two-front-cover paperback to indicate its "intertwingled" nature. Originally self-published by Nelson, it was republished with a foreword by Stewart Brand in 1987 by Microsoft Press. In Steven Levy's book Hackers, Computer Lib is described as "the epic of the computer revolution, the bible of the hacker dream. [Nelson] was stubborn enough to publish it when no one else seemed to think it was a good idea." Published just before the release of the Altair 8800 kit, Computer Lib is often considered the first book about the personal computer. Background Prior to the initial release of Computer Lib/Dream Machines, Nelson was working on the first hypertext project, Project Xanadu, founded in 1960. An integral part of the Xanadu vision was computing technology and the freedom he believed came with it. These ideas were later compiled and elaborated upon in the 1974 text, around the time locally networked computers had appeared and Nelson saw global networks as a space for the hypertext system. Synopsis Computer Lib In Computer Lib: You can and must understand computers NOW, Nelson covers both the technical and political aspects of computers. He attempts to explain computers to the layman at a time when personal computers had not yet become mainstream, anticipating a machine open for anyone to use. Nelson writes about the need for people to understand computers more deeply than was generally promoted as computer literacy, which he considers a superficial kind of familiarity with particular hardware and software. His rallying cry "Down with Cybercrud" is against the centralization of computers such as that performed by IBM at the time, as well as against what he sees as the intentional untruths that "computer people" tell to non-computer people to keep them from understanding computers. Dream Machines Dream Machines: New Freedoms through Computer Screens - A Minority Report is the flip side of Computer Lib. Nelson explores what he believes is the future of computers and alternative uses for them. This side was his counterculture approach to how computers had typically been used. Nelson covers the flexible media potential of the computer, which was shockingly new at the time. He saw the use of hypermedia and hypertext, both terms he coined, as beneficial for creativity and education. He urged readers to look at the computer not just as a scientific machine, but as an interactive machine that can be accessible to anyone. In this section, Nelson also described the details of Project Xanadu. He proposed the idea of a future Xanadu Network, where users could shop at Xanadu stands and access material from global storage systems. Format Both the 1974 and 1987 editions have an unconventional layout, with two front covers. The Computer Lib cover features a raised fist in a computer. Once flipped over, the Dream Machines cover shows a caped man flying, with a finger pointed at a screen. The division between the two sides is marked by text (for the other side) rotated 180°. The book was stylistically influenced by Stewart Brand's Whole Earth Catalog. The text itself is broken up into many sections, with simulated pull-quotes, comics, sidebars, etc., similar to a magazine layout. 
According to Steven Levy, Nelson's format requirements for the book's "over-sized pages loaded with print so small you could hardly read it, along with scribbled notations, and manically amateurish drawings" may have contributed to the difficulty of finding a publisher for the first edition; Nelson paid $2,000 out of his own pocket for the first print run of several hundred copies. Besides the Whole Earth Catalog, the layout also bore similarities to the People's Computer Company (PCC) newsletter, published by a Menlo Park-based group of the same name, where Nelson's book would gain (as described by Levy) "a cult following ... Ted Nelson was treated like royalty at [PCC] potluck dinners." Neologisms In Computer Lib, Nelson introduced several words that he coined: Cybercrud: "the author's own term for the practice of putting things over on people using computers (especially, forcing them to adapt to a rigid, inflexible, poorly thought out system)". In the text, Nelson puts forth the rallying cry "Down with Cybercrud!" Hypertext: coined by Nelson in 1965, text displayed on screen that references other information a user can access. Nelson explores several varieties of hypertext and their future in computing within Computer Lib. Some include: Chunk style, which consists of "chunks" of separate text or media connected by links. Stretch text, which extends itself: instead of linking, it zooms in to show more detail as needed. Intertwingularity: Nelson says "Everything is deeply intertwingled", meaning that all subjects and information are connected. The term merges intertwined and intermingled. Fantics: "the art and science of getting ideas across, both emotionally and cognitively". Nelson describes this as an audience receiving feelings while also receiving information from content. Legacy After its release, the book drew an underground following, from media theorists to computer hackers. In his book Tools for Thought, Howard Rheingold calls Computer Lib "the best-selling underground manifesto of the microcomputer revolution." It has since been referred to as "the most influential book in the history of computational media", as well as "the most important book in the history of new media" in The New Media Reader. One of the most widely adopted ideas from Computer Lib was Ted Nelson's "chunk-style" hypertext, the type of hypertext used in most websites today. Because the book came out before the first personal computer and its rise in popularity, Nelson has been credited with predicting how people would interact with computers for arts and entertainment, such as video games. He was one of the first to present the computer as an "all-purpose machine". External links Computer Lib/Dream Machines Retrospective - Excerpts from "Computer Lib" Xanadu, Network Culture, and Beyond. Chapter from Tools for Thought (book on history of computers) by Howard Rheingold
1250768
https://en.wikipedia.org/wiki/JMonkeyEngine
JMonkeyEngine
jMonkeyEngine (jME) is a game engine made especially for modern 3D development, as it uses shader technology extensively. 3D games can be written for both Android and desktop devices using this engine. jMonkeyEngine is written in Java and uses LWJGL as its default renderer (another renderer based on JOGL is available). OpenGL 2 through OpenGL 4 are fully supported. jMonkeyEngine is a community-centric open-source project released under the new BSD license. It is used by several commercial game studios and educational institutions. The default jMonkeyEngine 3 download comes integrated with an advanced SDK. jMonkeyEngine 3 SDK By itself, jMonkeyEngine is a collection of libraries, making it a low-level game development tool; a minimal usage sketch appears after the project list below. Coupled with an IDE like the official jMonkeyEngine 3 SDK, it becomes a higher-level game development environment with multiple graphical components. The SDK is based on the NetBeans Platform, enabling graphical editors and plugin capabilities. Alongside the default NetBeans update centers, the SDK includes its own plugin repository and a selection between stable point releases or nightly updates. Since March 5, 2016, the SDK has no longer been officially supported by the core team, but it is still actively maintained by the community. Note: the "jMonkeyPlatform" and the "jMonkeyEngine 3 SDK" are exactly the same thing. History jMonkeyEngine was built to address the lack of full-featured graphics engines written in Java. The project has a distinct two-part story, as the current core development team includes none of the original creators. jMonkeyEngine 0.1 – 2.0 Versions 0.1 to 2.0 of jMonkeyEngine mark the period from when the project was first established in 2003 until the last 2.0 version was released in 2008. When the core developers of that time gradually discontinued work on the project through late 2007 and early 2008, the 2.0 version had not yet been made officially stable. Regardless, the codebase was adopted for commercial use and the community actively supported the 2.0 version more than any other. jMonkeyEngine 3.0 After the departure of jME's core developers in late 2008, the codebase remained practically stagnant for several months. The community kept committing patches, but the project was not moving in any clear direction. Version 3.0 started as nothing more than an experiment. The first preview release of jME3 in early 2009 created a lot of buzz in the community, and the majority agreed that this new branch would be the official successor to jME 2.0. From there on, all the formalities were sorted out between the previous core developers and the new team. The jME core team is now composed of eight committed individuals. Projects powered by jMonkeyEngine Nord, a browser-based MMO on Facebook, created by Skygoblin. Grappling Hook, a first-person action and puzzle game, created by a single independent developer. Drohtin, a real-time strategy game (RTS) with single-player and multiplayer modes, in which players build a village and lead its citizens. Chaos, a 3D fantasy cooperative RPG by 4Realms. Skullstone, a retro-styled single-player dungeon crawler with modern 3D graphics, created by Black Torch Games. Spoxel, a 2D action-adventure sandbox video game, created by Epaga Games. Lightspeed Frontier, a space sandbox game with RPG, building, and exploration elements, created by Crowdwork Studios. Subspace Infinity, a 2D top-down space-fighter MMO. 
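As a rough illustration of what "a collection of libraries" means in practice, the following minimal sketch shows the typical entry point of a jME3 application: a subclass of SimpleApplication that attaches a single cube to the scene graph. The class name HelloJME3 is hypothetical; SimpleApplication, Geometry, Material, and the bundled Unshaded.j3md material definition are part of the public jME3 API, and the Material step reflects the shader-based design described above.

import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;

// Hypothetical minimal jME3 application. SimpleApplication supplies the
// render loop, a default camera, the rootNode, and the assetManager.
public class HelloJME3 extends SimpleApplication {

    public static void main(String[] args) {
        new HelloJME3().start(); // opens the display and starts the render loop
    }

    @Override
    public void simpleInitApp() {
        // Build a box mesh (extent 1 on each axis) and wrap it in a Geometry.
        Box box = new Box(1, 1, 1);
        Geometry geom = new Geometry("Box", box);

        // Materials are shader-based; Unshaded.j3md ships with the engine.
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", ColorRGBA.Blue);
        geom.setMaterial(mat);

        // Anything attached to rootNode is rendered every frame.
        rootNode.attachChild(geom);
    }
}

Run on the desktop, this draws a blue cube with the default free-look camera; the jMonkeyEngine 3 SDK builds on exactly this kind of scene-graph code at a higher, graphical level.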
Reception jMonkeyEngine was presented at JavaOne 2008 and was a finalist in the PacktPub Open Source Graphics Software Award in 2010. Ardor3D fork Ardor3D began life on September 23, 2008, as a fork of jMonkeyEngine by Joshua Slack and Rikard Herlitz, due to what they perceived as irreconcilable issues with naming, provenance, licensing, and community structure in that engine, as well as a desire to back a powerful open-source Java engine with organized corporate support. The first public release came on January 2, 2009, with new releases following every few months thereafter. In 2011, Ardor3D was used in the Mars Curiosity mission by both NASA Ames and NASA JPL for visualizing terrain and rover movement. On March 11, 2014, Joshua Slack announced that the project would be abandoned, although the software itself would remain under the zlib license and continue to be freely available. However, a subset of Ardor3D called "JogAmp's Ardor3D Continuation" is still actively maintained by Julien Gouesse.