2354870
https://en.wikipedia.org/wiki/Sudan%20University%20of%20Science%20and%20Technology
Sudan University of Science and Technology
Sudan University of Science and Technology (abbreviated SUST) is one of the largest public universities in Sudan, with ten campuses in Khartoum state. The main campus is located in the Al Mugran area of Khartoum, at the confluence of the White Nile and the Blue Nile. History SUST was founded in colonial Sudan as the Khartoum Technical School and School of Commerce in 1902. Later, the School of Radiology (1932), the School of Design (1946) and the School of Commerce merged with the Khartoum Technical School to form the Khartoum Technical Institute (KTI) in 1950. The Shambat Institute of Agriculture (1954), the Khartoum Senior Trade School (1962), the Institute of Music and Drama and the Higher Institute of Physical Education (1969) were also added, and the institution was renamed the Khartoum Polytechnic Institute (KP) in 1975. In 1990, this became the Sudan University of Science and Technology. Colleges and Schools College of Graduate Studies College of Engineering College of Petroleum Engineering and Technology College of Water and Environmental Engineering College of Industries Engineering and Technology College of Architecture and Planning College of Computer Science and Information Technology College of Medicine College of Dentistry College of Pharmacy College of Medical Laboratories Science College of Medical Radiologic Sciences College of Science College of Agricultural Studies College of Veterinary Medicine College of Animal Production Science and Technology College of Forestry and Range Science College of Business Studies College of Languages College of Education College of Fine and Applied Arts College of Music and Drama College of Physical Education and Sports College of Communication Science College of Technology Campuses SUST has 10 campuses located in Khartoum, the capital and largest city of Sudan, as well as elsewhere in Khartoum State. The main campus is located in the Al Mugran area of Khartoum.
Main Campus Vice chancellor and principal offices Secretariat of Academic Affairs Deanship of Admission and Registration Deanship of Distance Education Deanship of Library Affairs Deanship of Quality and Development Deanship of Scientific Research Deanship of Students Affairs College of Business Studies College of Computer Science and Information Technology College of Education College of Fine and Applied Art College of Graduate Studies College of Languages College of Technology College of Science Institute of Laser Institute of Islamic Science and Research Institute for Family and Community Development Computer Center Southern Campus College of Engineering College of Physical Education and Sports College of Petroleum Engineering and Technology Music and Drama Campus College of Music and Drama (studios and theatre) College of Communication Science Institute of Studies and Culture of Peace SUST Radio Administration Kuku Campus College of Veterinary Medicine College of Animal Production Science and Technology Kuku Zoo Northern Campus Textile Engineering Department Plastic Engineering Department Leather Engineering Department Leather Incubator Shambat Campus College of Agricultural Studies research centre soil laboratory Wad-Al-Maqbul Campus College of Water and Environmental Engineering Meteorology Center Medical and Health Campus College of Medicine College of Pharmacy College of Dentistry Radiologic Science Campus College of Medical Radiologic Science Ultrasound Clinic Forestry Campus College of Forestry and Range Science Facilities Libraries SUST libraries are part of each campus and managed by the Deanship of Libraries Affairs. The libraries' collection includes books, eBooks, print and electronic holdings of scholarly journal subscriptions, microforms, music recordings, a sizable map collection and a documentary department. In particular, SUST libraries include 12 distinct facilities: Library of Veterinary Medicine and Animal Production Library of Agricultural Studies Library of Forestry and Range Science Library of Engineering Library of Industrial Engineering Library of Medical Radiological Sciences Library of Water Science and Technology Library of Physical Education Library of Petroleum Engineering and Technology Library of Communication Science Library of Computer Science and Information Technology Library of Medical Sciences Also, SUST has a distinguished digital repository. Sports Facilities SUST is home to the only college for physical education and sports in Sudan. Therefore, the university has some of the best sports facilities in the country. The college strives to provide quality training, as well as the resources for the development, coaching and playing of sport. Many of these facilities are available to students, staff and members of the public. The university's main sports facilities are primarily located at the southern campus. SUST has the following indoor and outdoor facilities: Sports halls Swimming pools Grass pitches Sand-dressed pitches Climbing and bouldering wall State-of-the-art fitness suites Pavilions Reputation Ranking SUST ranked first among Sudanese universities in the Webometrics classification of international universities and institutes in both July 2021 and January 2022. SUST was also listed in the eighteenth edition of the QS World University Rankings in May 2021.
Digital Repository Ranking SUST ranked first among Sudanese universities in the May 2021 edition of the Webometrics ranking of digital repositories, placing 190th out of 4,403 institutional repositories and 205th out of 4,579 repositories globally. UNESCO Chair The university hosts the UNESCO Chair for Women in Science and Technology, which was established at the initiative of Prof. Fatima Abdel Mahmoud, who later became the first chairholder. Notable People Notable Academics Prof. Elfatih Hussien: Sudanese musician, author and critic. One of the pillars of music in Sudan and dean of the College of Music and Drama Notable Alumni Abdel Aziz El Mubarak: Sudanese singer Abdul Hakim Al-Taher: Sudanese theater director and actor Adel Harbi: Sudanese theater and TV director and scholar Afaf Al-Sadiq: University scholar Ali Madhi Nouri: Sudanese actor and stage director who was designated a UNESCO Artist for Peace in October 2012 Atif Elhaj Saeed: Sudanese award-winning novelist Kamala Ibrahim Ishaq: Sudanese artist Mazahir Salih: Sudanese-American community activist and Iowa City Councilmember Taysir El Nourani: Minister of Labour and Administrative Reform of Sudan Affiliations with international universities Multimedia University, Malaysia Limkokwing University of Creative Technology, Malaysia International Centre for Theoretical Physics (ICTP), Italy Norwegian University of Science and Technology University of Tromsø, Norway University of Oslo, Norway University of Tennessee, United States Boston University, U.S. California Institute of Technology, U.S. Accreditation African Quality Rating Mechanism (AQRM), 2017 Ministry of Higher Education and Scientific Research, Sudan World Textile University Alliance Educause Membership for International Institution References External links The official website for Sudan University of Science and Technology Sudanet.info, a community forum for Sudan University students and graduates that provides lectures, e-books and email access Sudan Virtual Engineering Library website Sudan virtual library National universities Universities and colleges in Sudan Science and technology in Sudan Scientific organisations based in Sudan Forestry education Veterinary schools Educational institutions established in 1932 Education in Khartoum 1932 establishments in the British Empire
6012146
https://en.wikipedia.org/wiki/Fishbrain
Fishbrain
Fishbrain is a mobile app and online platform for anglers that provides map-based tools, social networking features, fishing forecasts, and data-backed recommendations on fishing gear. Using Fishbrain, anglers can find new spots to fish and see exact catch locations on maps, which include depth information and user-generated tips and ratings. In addition to map functionality, users can obtain fishing forecasts including weather, lunar cycles, tidal charts, and more. Fishbrain also provides recommendations on when to fish based on predicted fish activity, previous user catches, and an analysis of species behavior. In 2019 Fishbrain added a marketplace where anglers can buy fishing gear. This marketplace is offered online and in-app and features gear recommendations based on real-world catch data. This allows anglers to see which gear is best for catching specific species, or for specific fishing tactics. With a user base of over 11 million, Fishbrain has access to a variety of data on species, migratory patterns, and more. Fishbrain works with governmental agencies to help identify endangered fish species, understand how climate is impacting behavioral patterns, and more. The company promotes sustainable fishing by encouraging a catch-and-release approach, as well as by encouraging users to clean up and care for their local waterways. Fishbrain has been described as the Foursquare of fishing, with a collection of fishing spots and the catches connected to them. Fishbrain was created by Jens Persson and Johan Attby and launched in 2010 as a free mobile app; it has since expanded to include utility functions for recreational fishing, with some of its features available with a subscription. The Fishbrain app can be downloaded from the App Store, as well as Google Play, and is available for iPhone, iPad, and Android devices. References External links 2010 software Android (operating system) software Internet properties established in 2010 IOS software Photo software Image sharing websites Swedish social networking websites Mobile software Proprietary cross-platform software Swedish brands Companies based in Stockholm Software companies of Sweden Fishing
4120803
https://en.wikipedia.org/wiki/CDC%206000%20series
CDC 6000 series
The CDC 6000 series is a discontinued family of mainframe computers manufactured by Control Data Corporation in the 1960s. It consisted of the CDC 6200, CDC 6300, CDC 6400, CDC 6500, CDC 6600 and CDC 6700 computers, all of which were extremely fast and efficient for their time. Each is a large, solid-state, general-purpose, digital computer that performs scientific and business data processing as well as multiprogramming, multiprocessing, Remote Job Entry, time-sharing, and data management tasks under the control of the operating system called SCOPE (Supervisory Control Of Program Execution). By 1970 there was also a time-sharing oriented operating system named KRONOS. They were part of the first generation of supercomputers. The 6600 was the flagship of Control Data's 6000 series. Overview The CDC 6000 series computers are composed of four main functional devices: the central memory, one or two high-speed central processors, ten peripheral processors (Peripheral Processing Units, or PPUs), and a display console. The 6000 series uses a "reduced instruction set" (RISC) many years before such a term was invented, and has a distributed architecture. The family's members differ primarily by the number and kind of central processor(s): The CDC 6600 has a single CPU with 10 functional units that can operate in parallel, each working on an instruction at the same time. The CDC 6400 has a single CPU with an identical instruction set, but with a single unified arithmetic function unit that can only do one instruction at a time. The CDC 6500 is a dual-CPU system with two 6400 central processors. The CDC 6700 is also a dual-CPU system, with a 6600 and a 6400 central processor. Certain features and nomenclature had also been used in the earlier CDC 3000 series: Arithmetic was ones' complement. The name COMPASS was used by CDC for the assembly languages on both families. The name SCOPE was used for its implementations on the 3000 and 6000 series. The only currently (as of 2018) running CDC 6000 series machine, a 6500, has been restored by Living Computers: Museum + Labs. It was built in 1967 and used by Purdue University until 1989, when it was decommissioned and then given to the Chippewa Falls Museum of Industry and Technology before being purchased by Paul Allen for LCM+L. History The first member of the CDC 6000 series was the supercomputer CDC 6600, designed by Seymour Cray and James E. Thornton in Chippewa Falls, Wisconsin. It was introduced in September 1964 and performs up to three million instructions per second, three times faster than the IBM Stretch, the speed champion for the previous couple of years. It remained the fastest machine for five years until the CDC 7600 was launched. The machine was cooled with Freon refrigerant. Control Data manufactured about 100 machines of this type, selling for $6 to $10 million each. The next system to be introduced was the CDC 6400, delivered in April 1966. The 6400 central processor is a slower, less expensive implementation with serial processing, rather than the 6600's parallel functional units. All other aspects of the 6400 are identical to the 6600. Then followed a machine with dual 6400-style central processors, the CDC 6500, designed principally by James E. Thornton, in October 1967. And finally, the CDC 6700, with both a 6600-style CPU and a 6400-style CPU, was released in October 1969.
Subsequent special edition options were custom-developed for the series, including: Attaching a second system configured without a Central Processor (numbered 6416 and identified as "Augmented I/O Buffer and Control") to the first; the combined total effectively was 20 peripheral and control processors with 24 channels, and the purpose was to support additional peripherals and "significantly increase the multiprogramming and batch job processing of the 6000 series." (A 30-PPU, 36-channel 6600 machine was operated by Control Data's Software Research Lab during 1971–1973 as the Minneapolis Cybernet host, but this version was never sold commercially.) Control Data also marketed a CDC 6400 with a smaller number of peripheral processors: CDC 6415–7 with seven peripheral processors CDC 6415–8 with eight peripheral processors CDC 6415–9 with nine peripheral processors Hardware Central memory (CM) In all the CDC 6000 series computers, the central processor communicates with around seven simultaneously active programs (jobs), which reside in central memory. Instructions from these programs are read into the central processor registers and are executed by the central processor at scheduled intervals. The results are then returned to central memory. Information is stored in central memory in the form of words. The length of each word is 60 binary digits (bits). The highly efficient address and data control mechanisms involved permit a word to be moved into or out of central memory in as little as 100 nanoseconds. Extended Core Storage (ECS) An extended core storage unit (ECS) provides additional memory storage and enhances the computing capabilities of the CDC 6000 series computers. The unit contains interleaved core banks, each one ECS word (488 bits) wide, with a 488-bit buffer for each bank. While nominally slower than CM, ECS included a buffer (cache) that in some applications gave ECS better performance than CM. However, with more common reference patterns, CM was still faster. Central processor The central processor was the high-speed arithmetic unit that functioned as the workhorse of the computer. It performed the addition, subtraction, and logical operations and all of the multiplication, division, incrementing, indexing, and branching instructions for user programs. Note that in the CDC 6000 architecture, the central processing unit performed no input/output (I/O) operations. Input/output was totally asynchronous, and performed by peripheral processors. A 6000 series CPU contained 24 operating registers, designated X0-X7, A0-A7, and B0-B7. The eight X registers were each 60 bits long, and used for most data manipulation, both integer and floating point. The eight B registers were 18 bits long, and generally used for indexing and address storage. Register B0 was hard-wired to always return 0. By software convention, register B1 was generally set to 1. (This often allowed the use of 15-bit instructions instead of 30-bit instructions.) The eight 18-bit A registers were 'coupled' to their corresponding X registers in an interesting way: setting an address into any of registers A1 through A5 caused a memory load of the contents of that address into the corresponding X register. Likewise, setting an address into register A6 or A7 caused a memory store into that location from X6 or X7. Registers A0 and X0 were not coupled in this way, so could be used as scratch registers. However, A0 and X0 were used when addressing CDC's Extended Core Storage (ECS).
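The address/operand register coupling described above can be made concrete with a short sketch. The following Python fragment is an illustrative model only, with assumed names and a simplified memory; it is not CDC software and it ignores ones'-complement arithmetic, ECS addressing, and timing. Writing A1 through A5 triggers an implicit load into the matching X register, while writing A6 or A7 triggers an implicit store from it, mirroring the COMPASS example given further below.

MASK60 = (1 << 60) - 1                         # central memory words are 60 bits wide

class CDC6000Registers:
    """Toy model of the 6000-series register file (names are illustrative)."""
    def __init__(self, memory):
        self.memory = memory                   # stand-in for central memory (list of words)
        self.X = [0] * 8                       # X0-X7: 60-bit operand registers
        self.A = [0] * 8                       # A0-A7: 18-bit address registers
        self.B = [0] * 8                       # B0-B7: 18-bit index registers (B0 reads as 0)

    def set_A(self, n, address):
        # Writing A1-A5 loads the matching X register from memory;
        # writing A6 or A7 stores the matching X register to memory;
        # A0 (like X0) is uncoupled and usable as scratch.
        self.A[n] = address & 0x3FFFF          # addresses are 18 bits
        if 1 <= n <= 5:
            self.X[n] = self.memory[address] & MASK60
        elif n in (6, 7):
            self.memory[address] = self.X[n] & MASK60

# Usage: the equivalent of "load X, load Y, add, store the sum back to X".
mem = [0] * 8
X_ADDR, Y_ADDR = 2, 3                          # hypothetical addresses of X and Y
mem[X_ADDR], mem[Y_ADDR] = 40, 2
regs = CDC6000Registers(mem)
regs.set_A(1, X_ADDR)                          # like SA1 X  (implicitly loads X1)
regs.set_A(2, Y_ADDR)                          # like SA2 Y  (implicitly loads X2)
regs.X[6] = (regs.X[1] + regs.X[2]) & MASK60   # like IX6 X1+X2 (plain binary add here; real hardware used ones' complement)
regs.set_A(6, regs.A[1])                       # like SA6 A1 (implicitly stores X6 back to X)
assert mem[X_ADDR] == 42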
Instructions were either 15 or 30 bits long, so there could be up to four instructions per 60-bit word. A 60-bit word could contain any combination of 15-bit and 30-bit instructions that fit within the word, but a 30-bit instruction could not wrap to the next word. The op codes were six bits long. The remainder of the instruction was either three three-bit register fields (two operands and one result), or two registers with an 18-bit immediate constant. All instructions were 'register to register'. For example, the following COMPASS (assembly language) code loads two values from memory, performs a 60-bit integer add, then stores the result:
SA1 X        SET REGISTER A1 TO ADDRESS OF X; LOADS X1 FROM THAT ADDRESS
SA2 Y        SET REGISTER A2 TO ADDRESS OF Y; LOADS X2 FROM THAT ADDRESS
IX6 X1+X2    LONG INTEGER ADD REGISTERS X1 AND X2, RESULT INTO X6
SA6 A1       SET REGISTER A6 TO (A1); STORES X6 TO X; THUS, X += Y
The central processor used in the CDC 6400 series contained a unified arithmetic element which performed one machine instruction at a time. Depending on instruction type, an instruction could take anywhere from a relatively fast five clock cycles (18-bit integer arithmetic) to as many as 68 clock cycles (60-bit population count). The CDC 6500 was identical to the 6400, but included two identical 6400 CPUs; thus the CDC 6500 could nearly double computational throughput. (But not the I/O throughput: that was still limited by the slowness of external I/O devices served by the same 10 PPs and 12 channels. Many CDC customers, however, worked on compute-bound problems, for which the 6500 was well suited.) The CDC 6600 computer, like the CDC 6400, has just one central processor. However, its central processor offered much greater efficiency. The processor was divided into 10 individual functional units, each of which was designed for a specific type of operation. All 10 functional units could operate simultaneously, each working on its own operation. The functional units provided were: branch, Boolean, shift, long integer add, floating-point add, floating-point divide, two floating-point multipliers, and two increment (18-bit integer add) units. Functional unit latencies were between a very fast three clock cycles (increment add) and 29 clock cycles (floating-point divide). The 6600 processor could issue a new instruction every clock cycle, assuming that the various processor resources (functional units, registers) were available. These resources were tracked by a scoreboard mechanism. Also contributing to keeping the issue rate high was an instruction stack, which cached the contents of eight instruction words (32 short instructions or 16 long instructions, or a combination). Small loops could reside entirely within the stack, eliminating memory latency from instruction fetches. Both the 6400 and 6600 CPUs had a cycle time of 100 ns (10 MHz). Due to the serial nature of the 6400 CPU, its exact speed was heavily dependent on instruction mix, but was generally around 1 MIPS. Floating-point additions were fairly fast at 11 clock cycles; floating-point multiplication, however, was very slow at 57 clock cycles. Thus its floating-point speed would depend heavily on the mix of operations and could be under 200 kFLOPS. The 6600 was, of course, much faster. With good compiler instruction scheduling, the machine could approach its theoretical peak of 10 MIPS.
Floating-point additions took four clock cycles, and floating-point multiplies took 10 clocks (but there were two multiply functional units, so two operations could be processing at the same time). The 6600 could therefore have a peak floating-point speed of 2-3 MFLOPS. The CDC 6700 computer combined the best features of the other three computers. Like the CDC 6500, it had two central processors. One was a CDC 6400/CDC 6500 central processor with the unified arithmetic section; the other was the more efficient CDC 6600 central processor. The combination made the CDC 6700 the fastest and the most powerful of the CDC 6000 series. Peripheral processors The central processor shares access to central memory with up to ten peripheral processors (PPs). Each peripheral processor is an individual computer with its own memory of 4K 12-bit words and a 1 μs cycle time. (They were somewhat similar to CDC 160A minicomputers, sharing the 12-bit word length and portions of the instruction set.) While the PPs were designed as an interface to the 12 I/O channels, portions of the Chippewa Operating System (COS), and systems derived from it, e.g., SCOPE, MACE, KRONOS, NOS, and NOS/BE, ran on the PPs. Only the PPs have access to the channels and can perform input/output: the transfer of information between central memory and peripheral devices such as disks and magnetic tape units. They relieve the central processor of all input/output tasks, so that it can perform calculations while the peripheral processors are engaged in input/output and operating system functions. This feature promotes rapid overall processing of user programs. Much of the operating system ran on the PPs, thus leaving the full power of the Central Processor available for user programs. Each peripheral processor can add, subtract, and perform logical operations. Special instructions perform data transfer between processor memory and, via the channels, peripheral devices at up to 1 μs per word. The peripheral processors are collectively implemented as a barrel processor. Each executes routines independently of the others. They are a loose predecessor of bus mastering or direct memory access. PP instructions used a 6-bit op code, leaving only 6 bits for an operand; it was also possible to combine this with the next word's 12 bits to form an 18-bit address (the size needed to access the full 131,072 words of Central Memory). Data channels For input or output, each peripheral processor accesses a peripheral device over a communication link called a data channel. One peripheral device can be connected to each data channel; however, a channel can be modified with hardware to service more than one device. Each peripheral processor can communicate with any peripheral device if another peripheral processor is not using the data channel connected to that device. In other words, only one peripheral processor at a time can use a particular data channel. Display console In addition to communication between peripheral devices and peripheral processors, communication takes place between the computer operator and the operating system. This was made possible by the computer console, which had two CRT screens. This display console was a significant departure from conventional computer consoles of the time, which contained hundreds of blinking lights and switches for every state bit in the machine. (See front panel for an example.) By comparison, the 6000 series console was an elegant design: simple, fast and reliable.
The console screens were calligraphic, not raster based. Analog circuitry steered the electron beams to draw the individual characters on the screen. One of the peripheral processors ran a dedicated program called "DSD" (Dynamic System Display), which drove the console. DSD's code needed to be fast, as it had to continually redraw the screen quickly enough to avoid visible flicker. DSD displayed information about the system and the jobs in process. The console also included a keyboard through which the operator could enter requests to modify stored programs and display information about jobs in or awaiting execution. A full-screen editor, called O26 (after the IBM model 026 key punch, with the first character made alphabetic due to operating system restrictions), could be run on the operator console. This text editor appeared in 1967, which made it one of the first full-screen editors. (Unfortunately, it took CDC another 15 years to offer FSE, a full-screen editor for normal time-sharing users on CDC's Network Operating System.) There were also a variety of games that were written using the operator console. These included BAT (a baseball game), KAL (a kaleidoscope), DOG (Snoopy flying his doghouse across the screens), ADC (Andy Capp strutting across the screens), EYE (changed the screens into giant eyeballs, then winked them), PAC (a Pac-Man-like game), a lunar lander simulator, and more. Minimum configuration The minimum hardware requirements of a CDC 6000 series computer system consisted of the computer, including 32,768 words of central memory storage, any combination of disks, disk packs, or drums to provide 24 million characters of mass storage, a punched card reader, a card punch, a printer with controllers, and two 7-track magnetic tape units. Larger systems could be obtained by including optional equipment such as additional central memory, extended core storage (ECS), additional disk or drum units, card readers, punches, printers, and tape units. Graphic plotters and microfilm recorders were also available. Peripherals CDC 405 Card Reader - Unit reads 80-column cards at 1200 cards per minute and 51-column cards at 1600 cards per minute. Each tray holds 4000 cards to reduce the rate of required loading. CDC 6602/6612 Console Display CDC 6603 Disk System CDC 626 Magnetic Tape Transports CDC 6671 Communications Multiplexer - supported up to 16 synchronous data connections up to 4800 bps each for Remote Job Entry CDC 6676 Communications Multiplexer - supported up to 64 asynchronous data connections up to 300 bps each for timesharing access. CDC 6682/6683 Satellite Coupler CDC 6681 Data Channel Converter Versions The CDC 6600 was the flagship. The CDC 6400 was a slower, lower-performance CPU that cost significantly less. The CDC 6500 was a dual CPU 6400, with 2 CPUs but only 1 set of I/O PPs, designed for compute-bound problems. The CDC 6700 was also a dual CPU machine, but had one 6600 CPU and one 6400 CPU. The CDC 6415 was an even cheaper and slower machine; it had a 6400 CPU but was available with only seven, eight, or nine PPUs instead of the normal ten. The CDC 6416 was an upgrade that could be added to a 6000 series machine; it added an extra PPU bank, giving a total of 20 PPUs and 24 channels, designed for significantly improved I/O performance. The 6600 The CDC 6600 was the flagship mainframe supercomputer of the 6000 series of computer systems manufactured by Control Data Corporation.
Generally considered to be the first successful supercomputer, it outperformed its fastest predecessor, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600, of which about 100 were sold, was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600. The CDC 6600 anticipated the RISC design philosophy and, unusually, employed a ones'-complement representation of integers. Its successors would continue the architectural tradition for more than 30 years until the late 1980s, and were the last designs to use ones'-complement integers. The CDC 6600 was also the first widespread computer to use a load–store architecture, with writes to its address registers triggering memory loads or stores of data from its data registers. The first CDC 6600s were delivered in 1965 to the Livermore and Los Alamos National Labs (managed by the University of California). Serial #4 went to the Courant Institute of Mathematical Sciences at NYU in Greenwich Village, New York City. The first delivery outside the US went to CERN laboratory near Geneva, Switzerland, where it was used to analyse the two to three million photographs of bubble-chamber tracks that CERN experiments were producing every year. In 1966 another CDC 6600 was delivered to the Lawrence Radiation Laboratory, part of the University of California at Berkeley, where it was used for the analysis of nuclear events photographed inside the Alvarez bubble chamber. The University of Texas at Austin had one delivered for its Computer Science and Mathematics Departments, and installed it underground on its main campus, tucked into a hillside with one side exposed, for cooling efficiency. A CDC 6600 is on display at the Computer History Museum in Mountain View, California. The 6400 The CDC 6400, a member of the CDC 6000 series, was a mainframe computer made by Control Data Corporation in the 1960s. The central processing unit was architecturally compatible with the CDC 6600. In contrast to the 6600, which had 10 parallel functional units which could work on multiple instructions at the same time, the 6400 had a unified arithmetic unit, which could only work on a single instruction at a time. This resulted in a slower, lower-performance CPU, but one that cost significantly less. Memory, peripheral processor-based input/output (I/O), and peripherals were otherwise identical to the 6600. In 1966, the Computing Center of RWTH Aachen University acquired a CDC 6400, the first Control Data supercomputer in Germany and the second one in Europe after the European Organization for Nuclear Research (CERN). It served the entire university, including through 64 remote teletype (TTY) lines, until it was replaced by a CDC Cyber 175 computer in 1976. Dual CPU systems The 6500 The CDC 6500, which features a dual CPU 6400, is the third supercomputer in the 6000 series manufactured by Control Data Corporation and designed by supercomputer pioneer Seymour Cray. The first 6500 was announced in 1964 and was delivered in 1967. It includes twelve independent computers. Ten are peripheral and control processors, each of which has a separate memory and can run programs separately from each other and the two 6400 central processors. Instead of being air-cooled, it has a liquid refrigeration system and each of the three bays of the computer has its own cooling unit.
CDC 6500 systems were installed at: Michigan State University - bought in 1968 to replace its CDC 3600; it was the only academic mainframe on campus. CERN - upgraded from a 6400 to a 6500 in April 1969. the technical lab at Patrick Air Force Base in 1978. the Laboratory of Computing Techniques and Automation in the Joint Institute for Nuclear Research (USSR) - originally bought a CDC 6200 in 1972, later upgraded to a 6500, retired in 1995 The 6700 Composed of a 6600 and a 6400, the CDC 6700 was the most powerful of the 6000 series. See also CDC Cyber - contained the successors to the 6000 series computers Notes References CONTROL DATA 6400/6500/6600 Computer Systems Reference Manual, Publication No. 60100000 D, 1967 CONTROL DATA 6400/6500/6600/6700 Computer Systems, SCOPE 3.3 User's Guide, Publication No. 60252700 A, 1970 CONTROL DATA 6400/6500/6600/6700 Computer Systems, SCOPE Reference Manual, Publication No. 60305200, 1971 Computer history on CDC 6600 Gordon Bell on CDC computers External links Neil R. Lincoln with 18 Control Data Corporation (CDC) engineers on computer architecture and design, Charles Babbage Institute, University of Minnesota. Engineers include Robert Moe, Wayne Specker, Dennis Grinna, Tom Rowan, Maurice Hutson, Curt Alexander, Don Pagelkopf, Maris Bergmanis, Dolan Toth, Chuck Hawley, Larry Krueger, Mike Pavlov, Dave Resnick, Howard Krohn, Bill Bhend, Kent Steiner, Raymon Kort, and Neil R. Lincoln. Discussion topics include CDC 1604, CDC 6600, CDC 7600, and Seymour Cray. CONTROL DATA 6400/6500/6600 COMPUTER SYSTEMS Reference Manual 2016 GeekWire article Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead 2013 GeekWire article on the restoration of a CDC 6500 at the LCM. Request a login to the working CDC 6500 at Living Computers: Museum + Labs, one of the computers online at Paul Allen's collection of timesharing and interactive computers. 6000 series Supercomputers 60-bit computers 12-bit computers Control Data mainframe computers Transistorized computers Control Data Corporation Computer-related introductions in 1964
2753
https://en.wikipedia.org/wiki/AutoCAD
AutoCAD
AutoCAD is a commercial computer-aided design (CAD) and drafting software application. Developed and marketed by Autodesk, AutoCAD was first released in December 1982 as a desktop app running on microcomputers with internal graphics controllers. Before AutoCAD was introduced, most commercial CAD programs ran on mainframe computers or minicomputers, with each CAD operator (user) working at a separate graphics terminal. AutoCAD is also available as mobile and web apps. AutoCAD is used in industry by architects, project managers, engineers, graphic designers, city planners, and other professionals. It was supported by 750 training centers worldwide in 1994. Introduction AutoCAD was derived from a program begun in 1977 and released in 1979 called Interact CAD, also referred to in early Autodesk documents as MicroCAD, which was written by Autodesk cofounder Michael Riddle prior to the formation of Autodesk (then Marinchip Software Partners). The first version by Autodesk was demonstrated at the 1982 Comdex and released that December. AutoCAD supported CP/M-80 computers. As Autodesk's flagship product, by March 1986 AutoCAD had become the most widely used CAD program worldwide. The 2022 release marked the 36th major release of AutoCAD for Windows and the 12th consecutive year of AutoCAD for Mac. The native file format of AutoCAD is .dwg. This and, to a lesser extent, its interchange file format DXF, have become de facto, if proprietary, standards for CAD data interoperability, particularly for 2D drawing exchange. AutoCAD has included support for .dwf, a format developed and promoted by Autodesk, for publishing CAD data. File formats Filename extensions AutoCAD's native file formats are denoted either by a .dwg, .dwt, .dws, or .dxf filename extension. The primary file format for 2D and 3D drawing files created with AutoCAD is .dwg. While other third-party CAD software applications can create .dwg files, AutoCAD uniquely creates RealDWG files. Using AutoCAD, any .dwg file may be saved to a derivative format. These derivative formats include: Drawing Template Files .dwt: New .dwg files are created from a .dwt file. Although the default template file is acad.dwt for AutoCAD and acadlt.dwt for AutoCAD LT, custom .dwt files may be created to include foundational configurations such as drawing units and layers. Drawing Standards File .dws: Using the CAD Standards feature of AutoCAD, a Drawing Standards File may be associated with any .dwg or .dwt file to enforce graphical standards. Drawing Interchange Format .dxf: The .dxf format is an ASCII representation of a .dwg file, and is used to transfer data between various applications. Features Compatibility with other software ESRI ArcMap 10 permits export as AutoCAD drawing files. Civil 3D permits export as AutoCAD objects and as LandXML. Third-party file converters exist for specific formats such as Bentley MX GENIO Extension, PISTE Extension (France), ISYBAU (Germany), OKSTRA and Microdrainage (UK); also, conversion of .pdf files is feasible; however, the accuracy of the results may be unpredictable, and distortions such as jagged edges may appear. Several vendors, such as Cometdocs, provide free online conversions. Language AutoCAD and AutoCAD LT are available for English, German, French, Italian, Spanish, Japanese, Korean, Chinese Simplified, Chinese Traditional, Brazilian Portuguese, Russian, Czech, Polish and Hungarian (also through additional language packs).
The extent of localization varies from full translation of the product to documentation only. The AutoCAD command set is localized as a part of the software localization. Extensions AutoCAD supports a number of APIs for customization and automation. These include AutoLISP, Visual LISP, VBA, .NET and ObjectARX. ObjectARX is a C++ class library, which was also the base for products extending AutoCAD functionality to specific fields, creating products such as AutoCAD Architecture, AutoCAD Electrical and AutoCAD Civil 3D, and for third-party AutoCAD-based applications. A large number of AutoCAD plugins (add-on applications) are available on the Autodesk Exchange Apps application store. AutoCAD's DXF (drawing exchange format) allows importing and exporting drawing information; a minimal illustrative example of the format appears below. Vertical integration Autodesk has also developed a few vertical programs for discipline-specific enhancements such as: Advance Steel AutoCAD Architecture AutoCAD Electrical AutoCAD Map 3D AutoCAD Mechanical AutoCAD MEP AutoCAD Plant 3D Autodesk Civil 3D Since AutoCAD 2019, several verticals have been included with an AutoCAD subscription as industry-specific toolsets. For example, AutoCAD Architecture (formerly Architectural Desktop) permits architectural designers to draw 3D objects, such as walls, doors, and windows, with more intelligent data associated with them rather than simple objects, such as lines and circles. The data can be programmed to represent specific architectural products sold in the construction industry, or extracted into a data file for pricing, materials estimation, and other values related to the objects represented. Additional tools generate standard 2D drawings, such as elevations and sections, from a 3D architectural model. Similarly, Civil Design, Civil Design 3D, and Civil Design Professional support data-specific objects facilitating easy standard civil engineering calculations and representations. Softdesk Civil was developed as an AutoCAD add-on by a company in New Hampshire called Softdesk (originally DCA). Softdesk was acquired by Autodesk, and Civil became Land Development Desktop (LDD), later renamed Land Desktop. Civil 3D was later developed and Land Desktop was retired. Variants AutoCAD LT AutoCAD LT is the lower-cost version of AutoCAD, with reduced capabilities, first released in November 1993. Autodesk developed AutoCAD LT as an entry-level CAD package to compete at the lower price level. Priced at $495, it became the first AutoCAD product priced below $1000. It was sold directly by Autodesk and in computer stores, unlike the full version of AutoCAD, which must be purchased from official Autodesk dealers. AutoCAD LT 2015 introduced Desktop Subscription service from $360 per year; as of 2018, three subscription plans were available, from $50 a month to a 3-year, $1170 license. While there are hundreds of small differences between the full AutoCAD package and AutoCAD LT, there are a few recognized major differences in the software's features: 3D capabilities: AutoCAD LT lacks the ability to create, visualize and render 3D models, as well as support for 3D printing. Network licensing: AutoCAD LT cannot be used on multiple machines over a network. Customization: AutoCAD LT does not support customization with LISP, ARX, .NET and VBA. Management and automation capabilities with Sheet Set Manager and Action Recorder. CAD standards management tools.
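As an illustration of the DXF interchange format referred to above: a DXF file is a plain-ASCII stream of alternating group-code and value lines. The Python sketch below writes a minimal, old-style (roughly R12-era) ENTITIES-only file containing a single LINE entity. It is a simplified assumption of what a tolerant DXF reader will accept, not Autodesk's reference output, and the file name is hypothetical.

def write_minimal_dxf(path, start=(0.0, 0.0), end=(100.0, 50.0)):
    # Each entry is a (group code, value) pair written as two ASCII lines.
    # Group code 0 introduces an entity or marker, 8 is the layer name,
    # 10/20 and 11/21 are the start and end point X/Y coordinates.
    pairs = [
        (0, "SECTION"), (2, "ENTITIES"),   # open the ENTITIES section
        (0, "LINE"), (8, "0"),             # one LINE entity on layer "0"
        (10, start[0]), (20, start[1]),    # start point
        (11, end[0]), (21, end[1]),        # end point
        (0, "ENDSEC"),                     # close the section
        (0, "EOF"),                        # end-of-file marker
    ]
    with open(path, "w", encoding="ascii") as f:
        for code, value in pairs:
            f.write(f"{code}\n{value}\n")  # group code line, then value line

write_minimal_dxf("example_line.dxf")      # hypothetical output file name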
AutoCAD Mobile and AutoCAD Web AutoCAD Mobile and AutoCAD Web (formerly AutoCAD WS and AutoCAD 360) is an account-based mobile and web application enabling registered users to view, edit, and share AutoCAD files via mobile device and web, using a limited AutoCAD feature set and cloud-stored drawing files. The program, which is an evolution and combination of previous products, uses a freemium business model with a free plan and two paid levels, including various amounts of storage, tools, and online access to drawings. AutoCAD 360 includes new features such as a "Smart Pen" mode and linking to third-party cloud-based storage such as Dropbox. Having evolved from Flash-based software, AutoCAD Web uses HTML5 browser technology available in newer browsers including Firefox and Google Chrome. AutoCAD WS began with a version for the iPhone and subsequently expanded to include versions for the iPod Touch, iPad, Android phones, and Android tablets. Autodesk released the iOS version in September 2010, following with the Android version on April 20, 2011. The program is available via download at no cost from the App Store (iOS), Google Play (Android) and Amazon Appstore (Android). In its initial iOS version, AutoCAD WS supported drawing of lines, circles, and other shapes; creation of text and comment boxes; and management of color, layers, and measurements, in both landscape and portrait modes. Version 1.3, released August 17, 2011, added support for unit typing, layer visibility, area measurement and file management. The Android variant includes the iOS feature set along with such unique features as the ability to insert text or captions by voice command as well as manually. Both Android and iOS versions allow the user to save files online, or offline in the absence of an Internet connection. In 2011, Autodesk announced plans to migrate the majority of its software to "the cloud", starting with the AutoCAD WS mobile application. According to a 2013 interview with Ilai Rotbaein, an AutoCAD WS product manager for Autodesk, the name AutoCAD WS had no definitive meaning, and was interpreted variously as Autodesk Web Service, White Sheet or Work Space. In 2013, AutoCAD WS was renamed AutoCAD 360; it was later renamed AutoCAD Web App. Student versions AutoCAD is licensed free of charge to students, educators, and educational institutions, with a renewable 12-month license available. Licenses acquired before March 25, 2020 were 36-month licenses, with the last renewal on March 24, 2020. The student version of AutoCAD is functionally identical to the full commercial version, with one exception: DWG files created or edited by a student version have an internal bit-flag set (the "educational flag"). When such a DWG file is printed by any version of AutoCAD (commercial or student) older than AutoCAD 2014 SP1 or AutoCAD 2019 and newer, the output includes a plot stamp/banner on all four sides. Objects created in the student version cannot be used commercially. Student version objects "infect" a commercial version DWG file if they are imported in versions older than AutoCAD 2015 or newer than AutoCAD 2018. Ports Windows AutoCAD Release 12 in 1992 was the first version of the software to support the Windows platform - in that case Windows 3.1. After Release 14 in 1997, support for MS-DOS, Unix and Macintosh was dropped, and AutoCAD became Windows-only. In general, any new AutoCAD version supports the current Windows version and some older ones.
AutoCAD 2016 to 2020 support Windows 7 up to Windows 10. Mac Autodesk stopped supporting Apple's Macintosh computers in 1994. Over the next several years, no compatible versions for the Mac were released. In 2010, Autodesk announced that it would once again support Apple's Mac OS X. Most of the features found in the 2012 Windows version can be found in the 2012 Mac version. The main difference is the user interface and layout of the program. The interface is designed so that users who are already familiar with Apple's macOS software will find it similar to other Mac applications. Autodesk also built in various features to take full advantage of Apple's Trackpad capabilities, as well as the full-screen mode in Apple's OS X Lion. AutoCAD 2012 for Mac supports both editing and saving files in the DWG format, allowing files to remain compatible with platforms other than OS X. AutoCAD 2019 for Mac requires Mac OS X 10.11 (El Capitan) or later. AutoCAD LT 2013 was available through the Mac App Store for $899.99. The full-featured version of AutoCAD 2013 for Mac, however, wasn't available through the Mac App Store due to the price limit of $999 set by Apple. AutoCAD 2014 for Mac was available for purchase from Autodesk's web site for $4,195 and AutoCAD LT 2014 for Mac for $1,200, or from an Autodesk authorized reseller. As of January 2022, the latest version available for Mac is AutoCAD 2022. Version history See also Autodesk 3ds Max Autodesk Maya Autodesk Revit AutoShade AutoSketch Comparison of computer-aided design software Design Web Format Feature creep LibreCAD – cross-platform, free and open source 2D CAD FreeCAD – cross-platform, free and open source 3D CAD BRL-CAD – cross-platform, free and open source 3D CAD References Further reading External links Autodesk products 1982 software IRIX software Computer-aided design software IOS software Classic Mac OS software Android (operating system) software MacOS computer-aided design software Software that uses Qt
19424660
https://en.wikipedia.org/wiki/Organizational%20patterns
Organizational patterns
Organizational patterns are inspired in large part by the principles of the software pattern community, which in turn takes its cues from Christopher Alexander's work on patterns of the built world. Organizational patterns also have roots in Kroeber's classic anthropological texts on the patterns that underlie culture and society. They in turn have provided inspiration for the Agile software development movement, and for the creation of parts of Scrum and of Extreme Programming in particular. History An early explicit citation to patterns of social structure can be found in the anthropological literature. Patterns are those arrangements or systems of internal relationship which give to any culture its coherence or plan, and keep it from being a mere accumulation of random bits. They are therefore of primary importance. Kroeber speaks of universal patterns that describe some overall scheme common to all human culture; of systemic patterns, which are broad but normative forms relating to beliefs, behaviors, signs, and economics; and of total culture patterns that are local. Kroeber notes that systemic patterns can pass from culture to culture: A second kind of pattern consists of a system or complex of cultural material that has proved its utility as a system and therefore tends to cohere and persist as a unit; it is modifiable only with difficulty as to its underlying plan. Any one such systemic pattern is limited primarily to one aspect of culture, such as subsistence, religion, or economics; but it is not limited areally, or to one particular culture; it can be diffused cross-culturally, from one people to another. . . . What distinguishes these systemic patterns of culture—or well-patterned systems, as they might also be called—is a specific interrelation of their component parts, a nexus that holds them together strongly, and tends to preserve the basic plan... As a result of the persistence of these systemic patterns, their significance becomes most evident on a historical view. The pattern aspect of Kroeber's view fits very well with the systems-thinking pattern view of Christopher Alexander in the field of architecture. Alexander's books became an inspiration for the software world, and in particular for the object-oriented programming world, in about 1993. Organizational patterns in the sense in which they are recognized in the software community today first made an appearance at the original Hillside Group workshop that would lead to the pattern community and its PLoP conferences. The Hillside Group sent out a call for pattern papers and, in 1994, held the first pattern conference at Allerton Park in central Illinois in the United States. The second conference, also at Allerton, would follow a year later. These first two PLoP conferences witnessed a handful of organizational patterns: The RaPPEL pattern language (1995) by Bruce Whitenack that described organizational structures suitable to requirements acquisition; The Caterpillar's Fate pattern language (1995) by Norm Kerth that described organizational structures supporting evolution from analysis to design; A work by James Coplien (1995) describing several years of organizational research at Bell Laboratories; Episodes, a pattern language by Ward Cunningham (1996) describing key points of what today we would call Agile software development; A pattern language by Neil Harrison (1996) on the formation and function of teams.
A flurry of associated publications and follow-up articles followed quickly thereafter, including an extemporization of the organizational patterns approach in the Bell Labs Technical Journal, an invited piece in ASE, a CACM article by Alistair Cockburn and, shortly thereafter, a pattern-laden book by Cockburn, as well as chapters by Benualdi and Janoff in the Patterns Handbook. It was also about this time that Michael A. Beedle et al. published patterns that described explicit extensions to existing organizational patterns, for application in projects using a then five-year-old software development framework called Scrum. A few more articles, such as the one by Brash et al., also started to appear. Little more happened on the organizational patterns front until the publication of the book by Berczuk et al. on configuration management patterns; this was an offshoot of the effort originally centered at Bell Labs. In the meantime, Jim Coplien and Neil Harrison had been collecting organizational patterns and combining them into a collection of four pattern languages. Most of these patterns were based on the original research from Bell Laboratories, which studied over 120 organizations over the period of a decade. These empirical studies were based on subject role-play in software development organizations, reminiscent of the sociodramas of Moreno's original social network approach. However, the pattern language also had substantial input from other sources and in particular the works by Cockburn, Berczuk, and Cunningham. This collection was published as Organizational Patterns of Agile Software Development in 2004. One of the most recent organizational pattern articles comes from an early pattern contributor and advocate, the object design pioneer Grady Booch. Principles of discovery and use Like other patterns, organizational patterns aren't created or invented: they are discovered (or "mined") from empirical observation. The early work on organizational patterns at Bell Laboratories focused on extracting patterns from social network analysis. That research used empirical role-playing techniques to gather information about the structure of relationships in the subject organization. These structures were analyzed for recurring patterns across organizations and their contribution to achieving organizational goals. The recurring successful structures were written up in pattern form to describe their tradeoffs and detailed design decisions (forces) and the context in which they apply, along with a generic description of the solution. Patterns provide an incremental path to organizational improvement. The pattern style of building something (in this case, an organization) is: find the weakest part of your organization; find a pattern that is likely to strengthen it; apply the pattern; measure the improvement or degradation; if the pattern improved things, go back to the first step and find the next improvement; otherwise, undo the pattern and try an alternative. As with Alexander-style patterns of software architecture, organizational patterns can be organized into pattern languages: collections of patterns that build on each other. A pattern language can suggest the patterns to be applied for a known set of working patterns that are present. Organizational patterns, agile, and other work The histories of Agile software development and of organizational patterns have been entwined since the beginning.
Kent Beck was the shepherd (interactive pattern reviewer) of the Coplien paper for the 1995 PLoP, and he mentions the influence of this work on extreme programming in a 2003 publication. The idea of daily Scrum meetings in fact came from a draft of an article for Dr. Dobb's Journal that described the organizational patterns research on the Borland QPW project. Beedle's early work with Sutherland brought the pattern perspective more solidly into the history of Scrum. More recently, the Scrum community has taken a renewed interest in organizational patterns, and there is joint research going forward between the two communities. In this vein, the first ScrumPLoP conference took place in Sweden in May 2010, sanctioned by both the Scrum Alliance and the Hillside Group. References Patterns
18681965
https://en.wikipedia.org/wiki/AWR%20Corporation
AWR Corporation
AWR Corporation is an electronic design automation (EDA) software company, formerly known as Applied Wave Research and later acquired by National Instruments. The company develops, markets, sells and supports engineering software, which provides a computer-based environment for the design of hardware for wireless and high speed digital products. AWR software is used for radio frequency (RF), microwave and high frequency analog circuit and system design. Typical applications include cellular and satellite communications systems and defense electronics including radar, electronic warfare and guidance systems. AWR's product portfolio includes Microwave Office, Visual System Simulator (VSS), Analog Office, APLAC, AXIEM and Analyst. AWR's customers include companies involved in the design and development of analog and mixed signal semiconductors, wireless communications equipment, aerospace and defense systems. History The company was founded in 1994 by Joseph E. Pekarek (with Ted A. Miracco and Stephen A. Maas) and Paul Cameron, from Hughes Aircraft, in Fullerton, California. First established as Applied Wave Research, AWR was founded to improve design efficiency for radio frequency and microwave circuit and system design. The vision of the company was to provide a modern object-oriented electronic design automation environment that could streamline high frequency electronic design by integrating schematic entry and layout; electromagnetic (EM) and circuit theory; and frequency and time-domain methods. Investors in AWR included CMEA Ventures, Intel Capital and Synopsys Inc. In 1998 the company demonstrated the Microwave Office software, which included EM, circuit simulation and schematic capture, at the International Microwave Symposium in Baltimore, Maryland. In 2010, AWR Corporation's major competitors included the EEsof EDA division of Agilent Technologies, Ansoft Corporation, and Cadence Design Systems. In 1999 AWR acquired ICUCOM Corporation of Troy, New York. Through this acquisition AWR obtained the communication systems simulation software called ACOLADE (Advanced Communication Link Analysis and Design Environment). AWR re-engineered the software and evolved the technology and libraries into a new tool: Visual System Simulator (VSS), which was introduced in 2002. In September 2005, AWR acquired APLAC Solutions Oy of Espoo, Finland, which developed simulation and analysis software for analog and radio-frequency (RF) design. APLAC's RF design technology was used in mobile phone RF integrated circuits. In 2008 AWR acquired Simulation Technology and Applied Research (STAAR), in Mequon, Wisconsin. STAAR developed proprietary parallelized 3D FEM EM simulation and analysis capability, marketed as Analyst software. AWR was acquired by National Instruments in 2011 for about $58 million. In January 2020, Cadence Design Systems, Inc. completed its acquisition of AWR Corporation from National Instruments. Notes References External links AWR web site Electronic design automation companies
25191984
https://en.wikipedia.org/wiki/Georg%20von%20Krogh
Georg von Krogh
Georg von Krogh (born 24 May 1963) is a Norwegian organizational theorist and Professor at the Swiss Federal Institute of Technology in Zurich (ETH Zurich) and holds the Chair of Strategic Management and Innovation. He also serves on the Strategy Commission at ETH Zurich. Biography Born in Oslo, Norway, to a family of Danish nobility, Von Krogh received his MSc from the Norwegian University of Science and Technology, and a Ph.D. from that university's Department of Industrial Economics and Technology Management. Von Krogh started his academic career as Assistant Professor of Business Policy at SDA Bocconi, Bocconi University in Italy. Subsequently, he was Associate Professor of Strategy at the Norwegian School of Management, Professor of Management at the University of St. Gallen in Switzerland, and a Director of that university's Institute of Management. He was also the President of the Research Commission at the University of St. Gallen. He has been Visiting Professor at MIT's Sloan School of Management, Hitotsubashi University in Japan, Japan Advanced Institute of Science and Technology, and the London School of Economics and Political Science. He was the Head of the Department of Management, Technology, and Economics at ETH Zurich from 2008 to 2011, and served on the National Research Council of the Swiss National Science Foundation (SNF) from 2012 to 2020. From 2008 to 2014 he was a board member of the European Academy of Management (EURAM). Currently, von Krogh serves as Chairman of the International Advisory Board at the Research Council of Norway. He holds an honorary position as Research Fellow at Judge Business School, University of Cambridge, and is an Affiliated Scholar with the Laboratory for Innovation Science (LISH) at Harvard University. He also serves as a member of the Scientific Council at EURAM, and is a member of the Advisory Board at the School of Management, Politecnico di Milano. Work Von Krogh specializes in competitive strategy, technological innovation, and knowledge management. He has conducted research in several industries including financial services, media, computer software and hardware, life sciences, and consumer goods. He teaches courses on Entrepreneurial Leadership, Strategic Management, and Innovation Theory and Research. Von Krogh has consulted on strategy and trained executives for companies in Asia, Europe, and the USA. He has served as a board or advisory board member of various companies and NGOs, including PwC in Switzerland, Swiss Bank Corporation (UBS), SKAT Foundation, and the World Web Forum. He also serves as a member of the Chapter Board at the Swiss-American Chamber of Commerce. He was an academic fellow and faculty member of the World Economic Forum (2002-2007), where he was actively involved in scenario development for industries and economies. Von Krogh serves as an editorial board member of various journals, which include Academy of Management Journal, Journal of Strategic Information Systems, European Management Review, European Management Journal, MIT Sloan Management Review, California Management Review, and Long Range Planning. He is an Associate Editor at Academy of Management Discoveries (2nd term), and previously served as Senior Editor at Organization Studies.
His awards and recognitions include, for example, the Association of American Publishers' "Best Professional Business Book Award", Harvard Business Review's "Breakthrough Idea", ETH's teaching award "Goldene Eule", European Management Review's "Best Paper Award 2012", and the Journal of Strategic Information Systems' "Best Paper Award 2008".
Selected publications
Shrestha, Y. R., He, V. F., Puranam, P., & von Krogh, G. (forthcoming). Algorithm Supported Induction for Building Theory: How Can We Use Prediction Models to Theorize? Organization Science.
von Krogh, G., Kucukkeles, B., & Ben-Menahem, S. (forthcoming). "Leave no stone unturned": What the COVID-19 Pandemic can teach us about ultrafast innovation. MIT Sloan Management Review.
He, F., Puranam, P., Shrestha, Y. R., & von Krogh, G. (forthcoming). Resolving governance disputes in communities: A study of software license decisions. Strategic Management Journal.
Ben-Menahem, S., Nistor-Gallo, R., Macia, G., von Krogh, G., & Goldhahn, J. (2020). How the new European regulation on medical devices will affect innovation. Nature Biomedical Engineering.
Sirén, C., He, F., Wesemann, H., Jonassen, Z., Grichnik, D., & von Krogh, G. (forthcoming). Leader emergence in nascent venture teams: The critical roles of individual emotion regulation and team emotions. Journal of Management Studies.
Gersdorf, T., He, V. F., Schlesinger, A., Koch, G., Ehrismann, D., Widmer, H., & von Krogh, G. (2019). Demystifying industry-academia collaboration. Nature Reviews Drug Discovery, 019-00001-2.
von Krogh, G., Netland, T., & Wörter, M. (2018). Winning With Open Process Innovation. MIT Sloan Management Review, 59(2), 53-56.
He, F., Sirén, C., Singh, S., Solomon, G. T., & von Krogh, G. (2017). Keep calm and carry on: Emotion regulation in entrepreneurs' learning from failure. Entrepreneurship Theory and Practice.
Trantopoulos, K., von Krogh, G., Wallin, M., & Wörter, M. (2017). External Knowledge and Information Technology: Implications for Process Innovation Performance. MIS Quarterly, 41(1), 287-300.
Faraj, S., von Krogh, G., Monteiro, E., & Lakhani, K. (2016). Special Section Introduction - Online Community as Space for Knowledge Flows. Information Systems Research, 27(4), 668-684.
Ben-Menahem, S., von Krogh, G., Erden, Z., & Schneider, A. (2016). Coordinating Knowledge Creation in Multidisciplinary Teams: Evidence from Early-Stage Drug Discovery. Academy of Management Journal, 59(4), 1308-1338.
von Hippel, E., & von Krogh, G. (2016). Identifying Viable 'Need-Solution Pairs': Problem Solving Without Problem Formulation. Organization Science, 27(1), 207-221.
Spaeth, S., von Krogh, G., & He, F. (2015). Perceived Firm Attributes and Intrinsic Motivation in Sponsored Open Source Software Projects. Information Systems Research, 26(1), 224-237.
Hacklin, F., Battistini, B., & von Krogh, G. (2013). Strategic choices in converging industries. MIT Sloan Management Review, 55(1), 65-73.
Garriga, H., von Krogh, G., & Spaeth, S. (2013). How constraints and knowledge impact open innovation. Strategic Management Journal, 34(9), 1134-1144.
von Krogh, G., Nonaka, I., & Rechsteiner, L. (2012). Leadership in organizational knowledge creation: A review and framework. Journal of Management Studies, 49(1), 240-277.
von Krogh, G., Haefliger, S., Spaeth, S., & Wallin, M. W. (2012). Carrots and rainbows: Motivation and social practice in open source software development. MIS Quarterly, 36(2), 649-676.
von Krogh, G., Battistini, B., Pachidou, F., & Baschera, P. (2012). The changing face of corporate venturing in biotechnology. Nature Biotechnology, 30(10), 911-915.
Haefliger, S., Jäger, P., & von Krogh, G. (2010). Under the radar: Industry entry by user entrepreneurs. Research Policy, 39(9), 1198-1213.
Gächter, S., von Krogh, G., & Haefliger, S. (2010). Initiating private-collective innovation: The fragility of knowledge sharing. Research Policy, 39(7), 893-906.
Raisch, S., & von Krogh, G. (2009). Focus intensely on a few great innovation ideas. Harvard Business Review, 87(10), 32.
Nonaka, I., & von Krogh, G. (2009). Tacit knowledge and knowledge conversion: Controversy and advancement in organizational knowledge creation theory. Organization Science, 20(3), 635-652.
Haefliger, S., von Krogh, G., & Spaeth, S. (2008). Code reuse in open source software. Management Science, 54(1), 180-193.
Maillart, T., Sornette, D., Spaeth, S., & von Krogh, G. (2008). Empirical tests of Zipf's law mechanism in open source Linux distribution. Physical Review Letters, 101(21), 218701-1--218701-4.
von Krogh, G., & Haefliger, S. (2007). Nurturing respect for IP in China. Harvard Business Review, 85(4), 23-24.
Raisch, S., & von Krogh, G. (2007). Navigating a path to smart growth. MIT Sloan Management Review, 48(3), 65-72.
von Krogh, G. (2006). Customers demand their slice of IP. Harvard Business Review, 84, 45-46.
von Krogh, G., & von Hippel, E. (2006). The promise of research on open source software. Management Science, 52(7), 975-983.
Nonaka, I., von Krogh, G., & Voelpel, S. (2006). Organizational knowledge creation theory: Evolutionary paths and future advances. Organization Studies, 27(8), 1179-1208.
von Hippel, E., & von Krogh, G. (2003). Open source software and the "private-collective" innovation model: Issues for organization science. Organization Science, 14(2), 209-223.
von Krogh, G., Spaeth, S., & Lakhani, K. R. (2003). Community, joining, and specialization in open source software innovation: a case study. Research Policy, 32(7), 1217-1241.
von Krogh, G. (2003). Networks of Innovation: Change and meaning in the age of the Internet. Research Policy, 32, 1714-1716.
von Krogh, G., & von Hippel, E. (2003). Special issue on open source software development. Research Policy, 32(7), 1149-1157.
von Krogh, G., & Cusumano, M. A. (2001). Three strategies for managing fast growth. MIT Sloan Management Review, 42(2), 53-62.
von Krogh, G., & Roos, J. (1996). A Tale of the unfinished. Strategic Management Journal (1986-1998), 17(9), 729-739.
von Krogh, G., Roos, J., & Slocum, K. (1994). An essay on corporate epistemology. Strategic Management Journal, 15(S2), 53-73.
References
External links
Georg von Krogh at ETH Zurich
Books by Georg von Krogh
Articles in Refereed Journals by Georg von Krogh
Book Chapters and (not refereed) Articles by Georg von Krogh
Conference Publications by Georg von Krogh
CEO Magazine interview on diversity in firms
1963 births
Living people
ETH Zurich faculty
Academics from Oslo
Norwegian University of Science and Technology alumni
University of St. Gallen faculty
Bocconi University faculty
BI Norwegian Business School faculty
Danish nobility
Von Krogh family
7561584
https://en.wikipedia.org/wiki/The%20New%20College%2C%20Chennai
The New College, Chennai
The New College is an institution of higher education in Chennai, Tamil Nadu. Established in 1951, the institution is one of the affiliated colleges of the University of Madras, with autonomous status. The college was established by the Muslim Educational Association of Southern India (MEASI) to meet the educational requirements of Muslim students in South India. Muslim students make up as much as 90% of the college's strength.
Location
The college is located in the centre of Chennai. It is one of the few colleges in Chennai located within the city hub. The older buildings are notable for their Indo-Saracenic architecture, which stands in contrast to the new educational blocks. The older buildings are being renovated. The same campus also houses the New College Institute of Management, the MEASI Academy of Architecture, the Institute of Research in Soil Biology and Biotechnology, the MEASI Institute of Information Technology and the MEASI CA Academy.
The New College was founded on 2 July 1951 by the Muslim Educational Association of Southern India (MEASI). The college is an institution of higher education offering instruction in more than twenty courses in humanities, science and commerce at undergraduate level and more than twelve courses at postgraduate level. Though primarily established for providing higher education to Muslim students, its doors are open to students from all communities. Deeniyath classes for Muslim students and moral instruction classes for others are conducted during working hours. Working hours are 08.30 am to 01.30 pm for shift I and 02.15 pm to 06.40 pm for shift II.
Courses
The college offers 23 courses in humanities, sciences and commerce at the undergraduate level and 10 courses at the postgraduate level.
Shift I (08:30 am to 01:30 pm)
Undergraduate - Aided
B.A. Arabic
B.A. Economics
B.A. English
B.A. History
B.A. Sociology
B.Sc. Mathematics
B.Sc. Physics
B.Sc. Chemistry
B.Sc. Plant Biology & Plant Biotechnology
B.Sc. Advanced Zoology & Biotechnology
B.Com. General
B.Com. Corporate Secretaryship
Undergraduate - Self-Financing
B.Sc. Computer Science
Postgraduate - Aided
M.A. Arabic
M.A. Tamil
M.A. English
M.Sc. Chemistry
M.Sc. Zoology
M.Com.
Research Courses
Part Time
M.Phil. / Ph.D. Arabic
M.Phil. / Ph.D. Tamil
M.Phil. / Ph.D. Chemistry
Ph.D. English
Ph.D. Economics
Ph.D. Zoology
Ph.D. Physics
Ph.D. Commerce
Full Time
M.Phil. English
M.Phil. / Ph.D. Tamil
M.Phil. / Ph.D. Arabic
M.Phil. / Ph.D. Economics
M.Phil. / Ph.D. Chemistry
M.Phil. / Ph.D. Zoology
M.Phil. / Ph.D. Commerce
Shift II (02:00 pm to 06:25 pm)
Undergraduate - Self-Financing
B.A. Urdu
B.Sc. Biotechnology
B.Sc. Computer Science
B.B.A.
B.C.A.
B.Com. General
B.Com. Corporate Secretaryship
B.Com. Bank Management
B.Com. Information Systems Management
Postgraduate - Self-Financing
M.A. History
M.Sc. Computer Science
M.Sc. Physics
M.Sc. Mathematics
M.Sc. Botany
M.Com. Corporate Secretaryship
Extracurricular
Student volunteer organisations include the National Cadet Corps, the National Service Scheme, Red Cross Youth and a Rotaract club. A corporate social responsibility club is run exclusively by students of B.Com. Corporate Secretaryship and funded by the students themselves.
The NCC has two divisions in the college - the Armoured Squadron and the Battalion.
Armoured
In "1 TN Armd Sqn NCC", students see and drive armoured tanks and learn about rifles. The squadron is led by Captain M. Anees Ahmed, a senior professor in the Department of Physics.
Battalion "2 Company 1 Tamil Nadu Battalion NCC" deals with rifles. The cadets learn rifles and signals. The battalion is led by H. Zahid Hussain who is a senior professor in the Economics Department. Every year he creates 160 cadets. Campus culture The campus is cosmopolitan with students from India and abroad, especially from Islamic nations such as Saudi Arabia, Oman, Sudan, Malaysia and Bangladesh. The college represents numerous sports activities all over the city. The college conducts elections to select Students Chairman and General Secretary. MEASI Institute of Management In 1987, the Muslim Educational Association of Southern India established the MEASI Institute of Management to give training in management with emphasis on practical application suited to Indian environmental and management requirements. It is one of the few institutions in South India that offered an MBA Programme in 1987. The IM is managed by a board of trustees consisting of: M. Mohammed Hashim, Chairman H. M. Shamsudeen, Executive Director A. Mohamed Ashraf, Finance Director M. Avais Musvee, Member M. Mohamed Shameem, Member Abdul Jabbar Suhail, Member K. Shahid Mansoor, Member Irsad Ahmed Mecca, Member M. Mohamed Hashim, Invitee U. Mohamed Khalilullah, Invitee A. K. Abdulla, Invitee Noman H. Millwala, Invitee C. Abdul Malick, Invitee Imtiaz Ahamed, Invitee Dr. Major Zahid Husain, Ex Officio Member N. Balasubramanian, Director MEASI IM has air-conditioned classrooms, a boardroom, staff rooms, offices, and a mini-auditorium. The course offered is M.B.A (full-time and part-time). It is approved by AICTE, affiliated to the University of Madras, accredited by the National Board of Accreditation, and ISO 9001-2000 certified. MEASI IM has produced gold medals and ranks both in the full-time and part-time MBA programs. It has produced 100% results almost every year. Students of the institute have taken part in inter-collegiate management meets conducted by business-schools and have brought laurels. More than 75% of the students have been placed in companies through the placement cell of the institute, every year. MEASI Institute of Management was ranked consistently in various magazines such as Times of India, India today's best business school, Careers 360, Silicon India, Outlook India and Dun & Bradstreet B school surveys. MEASI Institute of Information Technology MEASI Institute Of Information Technology is an independent institute in Chennai, offering an MCA programme. It was founded in 2002 by the Muslim Educational Association of Southern India (MEASI). The association was registered under the societies act XXI of 1860. The MEASI ITT was established to run postgraduate courses in computer science and information technology, and approved by the All India Council for Technical Education (AICTE), New Delhi and affiliated to the University of Madras, Chennai. MEASI Academy of Architecture MEASI Academy of Architecture was established in 1999 by the Muslim Educational Association of Southern India (MEASI). MEASI AA is located in the heart of city, inside the New college campus. It consists of studios and classrooms. The Board of Trustees is: Chairman - K. Ameenur Rahman Hon. Secretary - H. M. Shamsudeen Hon. Treasurer - A. Mohammed Ashraf Trustees - A. Mohamed Haris T. P. Imbichammad S. Ziaudeen Ahmed H. C. Abdul Azeez Courses are five year B.Arch, five year B.Arch (Interior Design), and two year postgraduate M.Arch (Real Estate Development). The course is headed by Prof. N. Altaf Ahmed. 
It is accredited by the National Board of Accreditation and ISO 9001-2000 certified. The Alumni Association of MEASI Academy of Architecture was formed with ten batches of alumni on 13 September 2013, supporting the growth and development of the institution.
New College Managing Committee
Chairman - His Highness Nawab Mohammed Ali Azimjah
Honorary Secretary and Correspondent - Janab T. Rafeq Ahmed
Treasurer - Janab Elias Sait
Members
Janab Alhaj M. Mohamed Hasim
S. M. Hamid
Makkal Pavalar Inkulab (orator and revolutionary Tamil writer)
Notable alumni
C. S. Karnan, former judge
Woorkeri Raman, cricketer
K. R. Periyakaruppan, former Minister for Tamil Nadu Hindu Religious and Charitable Endowments
G. K. Vasan, Indian politician
Radha Ravi, actor of Tamil cinema, politician
Sarath Kumar, actor of Tamil cinema, politician
Karthik, actor of Tamil cinema
Jaishankar, Tamil actor
Vineeth, actor of Tamil and Malayalam cinema
Nagendra Prasad, film actor
Chinni Jayanth, comedian of Tamil cinema, mimicry artist
M. S. Guhan, film producer
T. R. Baalu, former Central Shipping Minister
Bharath, Tamil cinema actor
Veera Bahu, Tamil cinema actor
Prasanna Pandian
Shiva Rajkumar, Kannada actor
Puneeth Rajkumar, Kannada actor
Nelson Dilipkumar, film director
References
External links
New College
Educational institutions established in 1951
Arts and Science colleges in Chennai
The New College, Chennai
1951 establishments in India
26385818
https://en.wikipedia.org/wiki/UHY%20Hacker%20Young
UHY Hacker Young
The UHY Hacker Young Group is a top 20 group of UK chartered accountants, with over 100 partners and 540 professional staff operating from 19 offices across the UK. Established in London in 1925, the group is also a founder member of UHY International, an international network of accountancy firms with 325 offices located in 100 countries across the globe. Total fee income for the 2018 fiscal year was £52.1 million, making UHY Hacker Young the 19th largest accountancy network in the UK.
Since its foundation, the firm has offered a broad range of services for owners of family businesses, including bookkeeping and auditing, tax compliance and advice, corporate recovery and company secretarial services. UHY Hacker Young has also helped companies float on the Alternative Investment Market (AIM) since the market's inception in 1995, providing them with advice on pre-IPO fund-raising, admissions, due diligence and tax assistance and planning. In more recent years, the London office has launched international business desks for Malaysia, Russia & CIS, India, Germany, the US, the Persian Gulf region, Nordic countries and Israel.
Services
UHY Hacker Young is a professional services firm and thus covers a wide range of service areas: audit, business advisory & accounting, cloud accounting, corporate finance, corporate tax, FCA compliance, private client services, services for international businesses, tax investigations, enquiries & disclosures, turnaround & recovery, and VAT.
Name and branding
UHY Hacker Young Group offices are known under the following titles:
UHY Hacker Young - London, Nottingham, Manchester, Abergavenny, Ashford, Birmingham, Brighton & Hove, Bristol, Broadstairs, Cambridge, Chester, Letchworth Garden City, Newport, Royston, Huntingdon, Sittingbourne, Sheffield & Winchester
UHY Calvert Smith - York
UHY Hacker Young Fitch - Belfast
Notable clients
Andrew Andronikou, Michael Kiely and Peter Kubik of UHY Hacker Young were appointed administrators of Portsmouth Football Club when it went into administration on Friday 26 February 2010.
References
External links
Official website
UHY Hacker Young Early Careers website
UHY International website
UHY Cloud Accounting website
Accounting firms of the United Kingdom
14122974
https://en.wikipedia.org/wiki/Backup%20Express
Backup Express
Catalogic DPX (formerly BEX or Backup Express) is an enterprise-level data protection tool that backs up and restores data and applications for a variety of operating systems. It has data protection, disaster recovery and business continuity planning capabilities.
Catalogic DPX protects physical or virtual servers, including VMware vSphere, and supports many database applications, including Oracle, SQL Server, SharePoint, and Exchange. DPX supports agent-based or agentless backups. Users can map to and use a backed-up version of a database if something goes wrong with the primary version.
DPX is managed from a single console and catalog. This allows for centralized control of both tape-based and disk-based data protection jobs across heterogeneous operating systems. DPX can protect data centers and remote sites, and supports recovery from a disaster recovery (DR) site. DPX can protect data to disk, tape or cloud. It is used for various recovery use cases including file, application, bare metal recovery (BMR), VM or DR recovery. DPX can spin up VMs from backup images, recover physical servers, bring applications online from snapshot-based backups, and can be used to recover from ransomware.
Supported applications
According to the DPX Interface Guide, DPX contains interfaces to the following software and database management systems: DB2, Lotus Notes, Microsoft Exchange, Novell GroupWise, Micro Focus Open Enterprise Services (OES), Oracle, SAP, SharePoint, SQL-BackTrack, SQL Server, and Sybase. For Oracle, DPX provides cloning capabilities. For Microsoft Exchange, SQL Server, and Oracle, DPX provides read/write access to recovery points. SQL log truncation, rollback in time and point-in-time recovery for SQL, and granular recovery of Exchange, SharePoint and SQL data are available.
Storage media
DPX uses tape, tape libraries (jukeboxes), virtual tape libraries (VTL), disk (local, NetApp, Dell EMC, HP, Data Domain), disk-to-disk-to-tape, or cloud as a backup target. DPX will back up to any disk with vStor or NetApp FAS storage.
Structure
DPX infrastructure has three types of components; a minimal illustrative sketch of this division of labour appears below.
Master Server
Controls all backup management tasks, the catalog, scheduling, job execution, and distributed processing.
Device Server/Advanced Server/Open Storage Server/vStor Server
Handles backup media – either tape or disk.
Client Node
Any computer in a DPX enterprise from which data is backed up is considered a DPX client node.
Catalogic Software history
In October 2013, Syncsort sold its data protection business to an investor group led by Bedford Venture Partners and Windcrest Partners. The spun-off data protection business is now called Catalogic Software, the company that produces Catalogic DPX. On June 4, 2018, it was reported that Catalogic Software had taken an equity stake in the European data protection company Storware.
See also
List of backup software
References
Syncsort Data Protection is Now Catalogic Software, October 2014
Catalogic DPX 4.7.0, January 2021
Backup software
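The three-tier layout described under Structure can be illustrated with a short sketch. This is a purely illustrative model under stated assumptions, not Catalogic's code or API: the class and method names below are invented for clarity and only mirror the division of labour the article describes (a master server owning the catalog and scheduling, a device server owning the backup media, and client nodes as the data sources).

# Illustrative sketch only -- not Catalogic DPX code or its real API.
# It mirrors the roles described above: a master server that owns the
# catalog and coordinates jobs, device servers that own backup media,
# and client nodes whose data is backed up.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ClientNode:
    name: str                      # any computer whose data is protected
    paths: List[str]               # data selected for backup


@dataclass
class DeviceServer:
    name: str
    media: str                     # e.g. "tape", "disk", "cloud" (assumed labels)
    stored: List[str] = field(default_factory=list)

    def write(self, item: str) -> None:
        self.stored.append(item)   # stand-in for writing to the backup target


@dataclass
class MasterServer:
    catalog: Dict[str, str] = field(default_factory=dict)

    def run_job(self, client: ClientNode, device: DeviceServer) -> None:
        # The master coordinates the job and records it in the catalog;
        # data flows from the client node to the device server's media.
        for path in client.paths:
            device.write(f"{client.name}:{path}")
            self.catalog[f"{client.name}:{path}"] = device.name


if __name__ == "__main__":
    master = MasterServer()
    node = ClientNode("fileserver01", ["/var/data", "/etc"])
    target = DeviceServer("vstor01", media="disk")
    master.run_job(node, target)
    print(master.catalog)          # shows which device holds each item

In a real deployment the master server would also handle scheduling, distributed processing and media rotation; the sketch only shows the catalog bookkeeping that ties a client's data to the device that stores it.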
113744
https://en.wikipedia.org/wiki/Quattro%20Pro
Quattro Pro
Quattro Pro is a spreadsheet program developed by Borland and now sold by Corel, most often as part of Corel's WordPerfect Office suite.
Characteristics
Historically, Quattro Pro used keyboard commands close to those of Lotus 1-2-3. While it is commonly said to have been the first program to use tabbed sheets, Boeing Calc actually utilized tabbed sheets earlier. It currently runs under the Windows operating system.
For years Quattro Pro had a comparative advantage in regard to maximum row and column limits, allowing a maximum worksheet size of one million rows by 18,276 columns. This avoided the 65,536-row by 256-column limitation inherent to Microsoft Excel prior to Excel 2007. Even with the maximum-row advantage, Quattro Pro has been a distant second to Excel in terms of sales from approximately 1996 to the present.
When version 1.0 was in development, it was codenamed "Buddha", since it was meant to "assume the Lotus position", i.e. #1 in the market. When the product was launched in 1988, its original name was Quattro (the Italian word for "four", a play on being one step ahead of "1-2-3"), suggested to Philippe Kahn by senior vice president Spencer Leyton at a Vietnamese restaurant in Santa Cruz. Borland changed the name to Quattro Pro for its 1990 release.
The common file extension of a Quattro Pro spreadsheet file is .qpw, which it has used since version 9. Quattro Pro versions 7 and 8 used .wb3, version 6 used .wb2, version 5 used .wb1, and DOS versions used .wq2 and .wq1.
Origins
The original Borland Quattro electronic spreadsheet was a DOS program, the initial development of which was done by three Eastern Europeans, one of whom, the Hungarian Lajos Frank, was later hired by Microsoft.
An article appeared in PC Week in 1985, quoting a maker of spreadsheet templates saying that he was in close contact with Borland and that Borland was developing a spreadsheet. At the time, no such development was under way at Borland. After they both read the article, Philippe Kahn and Spencer Leyton had a casual conversation in which they joked, half seriously, about perhaps developing a spreadsheet to compete with Lotus Development's 1-2-3. That led to Kahn setting up an appointment with Robert Stein of Andromeda Software, an agent for some Eastern European software developers who was also involved with the game Tetris. Leyton and Stein then negotiated an agreement providing for the development of the original Quattro.
Quattro was written in assembly language and Turbo C, principally by Adam Bosworth, Lajos Frank, and Chuck Batterman. It was praised mainly for superior graphics on DOS. Borland then acquired a replacement product called "Surpass", written in Modula-2. The main designers and programmers of Surpass were also hired by Borland to turn Surpass into Quattro Pro: Bob Warfield, Dave Anderson, Weikuo Liaw, Bob Richardson and Tod Landis. They joined other Borland programmers including Chuck Batterman, Lajos Frank, Tanj Bennett, Rich Reppert and Roger Schlafly. Bob Warfield later became Vice President of R&D at Borland. All eventually left Borland.
Quattro Pro shipped in the final quarter of 1989. Borland's main office was near the epicenter of the Loma Prieta earthquake, and the building was severely damaged when large, heavy air conditioners on the roof were thrown upward by the quake and came crashing down upon the glulam beams running across the top of the building.
The beams were damaged to the point where they required injections of epoxy in order to make them sturdy enough to support the building again. In addition, the sprinkler system was triggered. The building was closed for months. All the computers were removed, placed on the tennis courts, washed down (the acoustic ceilings had rained gray mush onto everything when the sprinklers ran) and dried with hair dryers. Those that booted up were put to work. Quattro Pro finished final quality assurance testing and was sent to manufacturing from those computers running on the tennis courts in the (fortunately) sunny and dry autumn weather.
Lawsuit
Some have claimed that Quattro Pro was the first to use the tabbed notebook metaphor, but another spreadsheet, Boeing Calc, used tabs for multiple sheets and allowed three-dimensional references before Quattro Pro was on the market. (Boeing Calc was so slow that its multiple-sheet capabilities were barely usable.)
Quattro Pro was the subject of a major lawsuit brought by Lotus against Borland. Lotus argued that Quattro Pro was not entitled to copy Lotus 1-2-3's menus. Borland supplied the 1-2-3 menus as an alternative because keystroke compatibility was needed in order to run macros in 1-2-3 worksheets. Borland argued that most cars operate the same way but are not necessarily made the same way, so Lotus could not rationally "own" the way its program behaved. The district court ruled in favor of Lotus, but the appellate court ruled that the 1-2-3 menus were functional and not copyrightable. The case went all the way to the U.S. Supreme Court, which split 4 to 4 (Justice Stevens recused himself). This left the lower court ruling intact, which was a victory for Borland. However, the broader issue of whether a company can own and protect the way its program behaves remained unresolved. By the time the case was resolved, Borland no longer owned Quattro Pro: Borland had sold the spreadsheet to Novell six months before the final decision was handed down.
Quattro Pro for Windows
Quattro Pro began as a DOS program (like Lotus 1-2-3), but with the growing popularity of Windows from Microsoft, a Windows version of Quattro needed to be written. There was almost nothing from the DOS code that could be moved to the Windows project, so Quattro Pro for Windows (QPW) was written from scratch. Both the QPW and Paradox for Windows codebases (the latter being another Borland database application) were based on Borland's internal pilot project with object-oriented UI code for Windows. This project ran simultaneously with the Borland language group investigating the desirability of a C++ compiler, and the company decided to make a bet on C++. However, the C++ compiler was not ready at first, and the object-oriented code for both projects was started in C with OO emulation through macros. As the Borland Turbo C++ compiler became available internally, the projects converted to using C++.
Charlie Anderson was put in charge of the project, and he soon had Istvan Cseri, Weikuo Liaw, Murray Low, Steven Boye, Barry Spencer, Alan Bush, Dave Orton, Bernie Vachon, Anson Lee, Tod Landis, Gordon Ko and Chuck Batterman working on the project. Other engineers joined later, and eventually the team numbered nearly 20. The object model was inspired by the NeXT object model, modified by Mr. Cseri. Mr. Liaw and Mr. Spencer were in charge of the spreadsheet engine (written in assembly language), while Mr. Low wrote a large chunk of the UI. The product was internally codenamed "Thor", for the Norse god of thunder.
QPW featured two major innovations.
First, it was the first Windows spreadsheet with multiple pages whose cells could be linked together seamlessly, a feature from Quattro Pro which QPW extended. Second, it was the first released Windows program to have an attribute menu (or property pane) available by right-clicking on an object. Although this idea was first seen on the Xerox Alto, it had not been implemented in a major Windows program. Paradox for Windows shared this feature, and it was shown off by Philippe Kahn at a Paradox user conference over a year before QPW was released. Both these ideas became widespread in the software industry.
QPW was one of the first big applications written in C++ on Windows, and it pushed the Borland C++ compiler to the limit. One reason why the Borland C++ compiler was so good was that it had to compile and link the massive QPW code base successfully. The technical risk of the QPW project was immense. The object model was untried and might not have worked for a spreadsheet. The user interface (UI) was new (for Windows programs at least). No one knew if the C++ compiler could generate fast enough code. As it turned out, the program worked. It was fast, it was close in feature set to Lotus 1-2-3 and Excel, and the "right-click for properties" user design was reasonably understandable. At one point, it was hoped that QPW and Paradox for Windows would be able to share a common object model, but that proved impossible despite serious thought and design efforts.
QPW was finally released in September 1992. The Quattro Pro marketing team had chosen to bundle both Quattro Pro for DOS and Quattro Pro for Windows in the same box, labeled "WIN-DOS", at a price of $495. Customers and reviewers expecting a pure Windows application responded with confusion and outrage, believing the product was merely a DOS application with windowing capabilities. Shortly thereafter QPW was re-packaged by itself and priced at $129, receiving accolades as Borland's long-delayed pure Windows spreadsheet at a popular price. Eventually it sold well (after the price was slashed to just $49 a copy).
Work started immediately on a new version with a brand new team of engineers led by Joe Ammirato and including Bret Gillis and Peter Weyzen. Borland purchased DataPivot from Brio Technology to add a new feature to the program, and Colin Glassey came from Brio to help with the integration of that technology. After a year and the merging of the old team and the new team, QPW 5 was released (the jump in version number kept it in step with the DOS version, and it also looked good). QPW 5 sold well too, though the Microsoft Excel and Word combination was gaining steam.
Work then started on version 6, now with Steven Boye as project lead. Midway through the development of version 6, a strategic decision was made to work closely with the WordPerfect word processor, in a direct attempt to push back at the Microsoft Office one-two punch of Microsoft Word and Microsoft Excel. The other big issue with version 6 was the advent of Windows 95, a significant modification to the Windows operating system with a major change to user interface guidelines. In an odd set of events, Novell purchased both WordPerfect Corporation and the Quattro Pro code base and team of engineers from Borland. Novell was going to try to be a real competitor to Microsoft. Although version 6 was released and some effort was made to unify the user interface between WordPerfect and QPW, the effort was far from complete.
In another lawsuit, Novell claimed that Microsoft had "deliberately targeted and destroyed" its WordPerfect and Quattro Pro programs to protect its Windows operating system monopoly. The US Supreme Court refused to halt the antitrust lawsuit in March 2008.
Novell's exit
The release of Windows 95 in August 1995 was the beginning of the end for Novell and its plans to compete with Microsoft. Not only did Microsoft release a new operating system, it also released new versions of Word and Excel to accompany it. Sales of Novell PerfectOffice (and of Lotus applications as well) sank to almost nothing, while sales of the Microsoft products were huge. Within three months, Novell announced that it was going to sell its applications to someone (eventually that proved to be Corel). By mid-1996, Microsoft had 95% of the market for business applications. Microsoft still dominates the market for Windows business application software, although Quattro Pro and WordPerfect, which pre-dated the MS Office 4.2 suite, are still both updated and sold.
File formats
Quattro Pro file formats use various filename extensions, including WB1, WB2, WB3, wq1 and wq2; some of these (WB2, wq1, and wq2) may open in the desktop applications of Collabora Online, LibreOffice or Apache OpenOffice and then be saved into the OpenDocument format or other file formats (a conversion sketch appears below). Microsoft dropped support for Quattro Pro file formats after Office 2007.
Version history
1989: Quattro Pro 1.0
1991: Quattro Pro 2.0
1992: Quattro Pro for Windows 1.0 (also in Borland Office for Windows (1993))
1993: Quattro Pro for Windows 5.0 (also in Borland Office 2.0 for Windows)
1994: Quattro Pro for Windows 6.0 (part of Novell PerfectOffice 3.0)
1996: Quattro Pro 7 (part of Corel WordPerfect Suite 7)
1998: Quattro Pro 8 (part of Corel WordPerfect Suite 8)
2000: Quattro Pro 9 (part of Corel WordPerfect Office 2000)
2002: Quattro Pro 10 (part of Corel WordPerfect Office 2002)
2003: Quattro Pro 11 (part of Corel WordPerfect Office 11)
2004: Quattro Pro 12 (part of Corel WordPerfect Office 12)
2006: Quattro Pro X3 (part of Corel WordPerfect Office X3)
2008: Quattro Pro X4 (part of Corel WordPerfect Office X4)
2010: Quattro Pro X5 (part of Corel WordPerfect Office X5)
2012: Quattro Pro X6 (part of Corel WordPerfect Office X6)
2014: Quattro Pro X7 (part of Corel WordPerfect Office X7)
2016: Quattro Pro X8 (part of Corel WordPerfect Office X8)
2018: Quattro Pro X9 (part of Corel WordPerfect Office X9)
2020: Quattro Pro 2020 (part of Corel WordPerfect Office 2020)
See also
More history of Surpass (pre-Quattro Pro code base)
Comparison of spreadsheet software
Office Open XML software
References
Spreadsheet software
Borland software
DOS software
Spreadsheet software for Windows
1988 software
Corel software
Assembly language software
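As a practical illustration of the file-format note above, legacy Quattro Pro workbooks can be batch-converted through LibreOffice running headless, in line with the compatibility the article describes. The sketch below is a minimal example, not part of Quattro Pro or Corel tooling: it assumes a local LibreOffice installation with the soffice binary on the PATH and that the installed Calc filters accept the given extension; the file names are hypothetical.

# Minimal sketch: convert a legacy Quattro Pro workbook to OpenDocument
# format via LibreOffice running headless. Assumes `soffice` is installed
# and on the PATH; the input file name is hypothetical.
import subprocess
from pathlib import Path


def convert_to_ods(workbook: Path, out_dir: Path) -> Path:
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "soffice",
            "--headless",              # run without a GUI
            "--convert-to", "ods",     # target format: OpenDocument spreadsheet
            "--outdir", str(out_dir),
            str(workbook),
        ],
        check=True,
    )
    return out_dir / (workbook.stem + ".ods")


if __name__ == "__main__":
    print(convert_to_ods(Path("budget.wq1"), Path("converted")))

The same pattern works for other target formats supported by LibreOffice (for example xlsx) by changing the --convert-to argument.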
27668304
https://en.wikipedia.org/wiki/DATEV
DATEV
DATEV is a registered cooperative society (i.e. "eG") that is primarily a technical information services provider for tax advisors, accountants and attorneys. While it initially emerged as a data center service provider, it now provides software directly to end users (e.g. businesses) and consulting services for these occupations. The co-operative's focus is on the tax market.
History
DATEV was founded on 14 February 1966 by 65 tax agents in the Nuremberg area to provide accounting services to their clients with the help of a computer. Its original name was DATEV Data Processing Organization for Tax Agents and Related Tax Services Providers within the Federal Republic of Germany, a registered co-operative society with limited liability ("Datenverarbeitung und Dienstleistung für den steuerberatenden Beruf"). The initiators were Heinz Sebiger, who was made an honorary citizen of Nuremberg in 1997, and Joachim Mattheus. The organisation was established to apply new IT applications to accounting practices, prompted by an existing workforce shortage and the imminent introduction of VAT in 1968.
The initial DATEV services were to be completely data center based. All data was to be entered into data collection devices held "on-site" at each accountant's office and then sent as tape by mail to the data center, or "Registry", where it was processed and duly returned by mail to each respective accountant's office as processed tax return reports. The initial data center services were provided by IBM until DATEV established its own center in 1969.
In 1974, DATEV offered its members a dial-up data service to replace the mail delivery of data on tape to the Registry. The resulting returns and reports (e.g. payrolls, accounting reports, taxes or stocks) continued to be delivered to members via post.
In 1984 work commenced on creating software to allow end-client data to be processed directly "in-house" (i.e. "on-site" in each member's office) in preference to being processed at the data center "Registry". The first "in-house" software applications were made available to members in 1989; they were based on the IBM operating system OS/2. In 1998 the "in-house" software was made available to members on Microsoft Windows, as it was clear that the OS/2 operating system was going to be phased out. In 1998, DATEV also began to focus on delivering more "in-house" business software services, consulting and training, based on increased demand from members. The rapid increase in PC performance and the on-site availability of high-quality laser printers (in duplex A4/A3) made the "in-house" processing of end-client data both faster and more flexible than data center processing.
Legal
The organisation is a registered cooperative. Membership is only open to members of the tax, accountancy and legal professions.
Business
In 1998 CEO Dieter Kempf led an initiative to add a second leg to the business by entering the legal services market. DATEV's takeover of the Hamburg-based company "MCT" brought the software "Phantasy" into its product portfolio. After a professionally and "politically" difficult start-up phase in the then crowded legal services market, DATEV now offers software for lawyers that ranks in the top five of its league, alongside RA-MICRO, AnNoText and ReNoStar/ReNoFlex.
The addition of legal services to DATEV's portfolio is still controversial, given that lawyers and tax professionals are increasingly offering the same services in some areas and are therefore becoming increasingly competitive with one another. DATEV's first attempts to deliver "services for lawyers" meant finding qualified legal partners with the required legal and IT backgrounds. Soon after the successful establishment of the legal services business, the DATEV product range was extended again with the offer of "audit" software for accountants.
A newer development is DATEV's DATEVnet, which provides businesses with managed security services. A law firm's communication equipment (ISDN/DSL) can be connected to the DATEV data center via an exclusive dial-in VPN tunnel. DATEVnet users are then routed through a central security zone to DATEV over the Internet. The security zone includes a range of staggered and redundant protection systems, such as antivirus and firewall systems that are constantly monitored and updated by security specialists. The multi-level security system also offers a highly responsive level of data protection in an environment that is constantly assaulted by new forms of data-damaging programs such as viruses, trojan horses, etc.
DATEV also offers secure e-mail communication with cascaded, centralized virus scanning and encrypted email. A special characteristic is the reverse scan, where copies of all e-mails sent to users within DATEVnet are held in a central storage buffer and continuously re-scanned for infections over twelve-hour periods (a generic sketch of this rolling re-scan idea appears below). Viruses, trojans and keyloggers are quickly detected, minimizing unwanted data infection or manipulation. To complete the protective belt, a reverse Web radar scan was also introduced into DATEVnet. The Web radar is a multi-level security concept with both static and dynamic protection filters: every web page accessed over the last 24 hours is reviewed to identify the origin of any infection or data corruption.
Strategic direction
In early 2000, internal disputes arose amongst the cooperative members with regard to DATEV's offering of software directly to end clients, i.e. non-professionals. Two movements formed within the co-operative: those in favor of providing software services directly to end clients ("the co-operative should do all it can to be economically successful") and those opposed to it ("the co-operative should only do things that bring benefits to its members"). As a result, the pro-members formed the "IDA" (Association of DATEV Users). Tension mounted between the pro-IDA members and existing members who claimed that the "economic success" of the co-operative was being placed in jeopardy. The controversy resulted in the calling of an extraordinary members' meeting on 18 February 2005. At the meeting, DATEV members voted by a majority of 86.5%, with the approval and support of the IDA, to amend the existing co-operative statute to allow for the provision of software directly to end clients. The provision of such direct services became known as "bound member client business" and requires the prior consent of the member consultant managing that end client before the direct services can be provided.
Internal co-operative tensions have also arisen as a result of the extension of existing services to the international domain, initially to the Czech Republic (2000), then Austria (2001), Italy (2001) and Poland (2003).
Despite the increasing tendency to reduce the use of "data center based" services and to increase the use of "on-site" processing software, the DATEV co-operative has continued to increase its revenues and profits. This has benefited members, who have received rebates on their fees. The provision of software directly to members for "on-site" processing has been available as a service since 2004 and has tended to reduce membership costs, particularly for smaller network installations.
Products
The company now offers additional software to accountants and lawyers for their offices. These professionals and their clients can use programs for payroll and financial accounting, current account bookkeeping, cost accounting and business consulting. There are programs for the processing and interpretation of all major types of taxes (corporation tax, income tax, GuE, consumption tax, trade tax, inheritance tax, SchSt). The tax database LexInform provides comprehensive information on German tax law for consultants, as well as business information. In addition, DATEV offers business management consulting services for organization and data processing.
Platforms
DATEV assumes that customers are using Microsoft operating systems and application programs, i.e. Microsoft Windows, Internet Explorer and Microsoft Office. Data storage has historically mostly been based on Microsoft SQL Server, and exclusively so since early 2006. DATEV later began to provide partial program interfaces to OpenOffice.org and StarOffice, but was not the first manufacturer in the industry to do so. E-mail encryption to date has been limited to passing within Microsoft programs only. In 2017, OpenOffice support was discontinued due to insufficient demand. The DATEV programs for payroll accounting also have approved interfaces to the ERP systems SAP Business One, SAP Business ByDesign and Microsoft Dynamics NAV.
Sponsorship
DATEV is a sponsor of the Challenge Roth. The contract was extended, and DATEV will be the title sponsor for another three years from 2022.
Subsidiaries
DATEV.at GmbH / Vienna, Austria
DATEV.cz s.r.o. / Brno, Czech Republic
DATEV.pl sp. z o.o. / Warsaw, Poland
DATEV koinos S.r.l. / Milan, Italy
DATEV.it S.p.A. / Assago, Italy
Key data
Turnover 2017: €978 million
Members: 40,264 (June 30, 2018)
Employees: 7,387 (June 30, 2018)
References
External links
DATEV
DATEV International Site
Sinfopac Internacional, S.L.U - DATEV Partner for Solutions
Information technology companies of Germany
Cooperatives in Germany
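The "reverse scan" described under Business can be illustrated generically: delivered messages are kept in a rolling buffer and re-checked with the newest detection logic for a fixed window. The sketch below is a generic illustration under stated assumptions, not DATEV's implementation; the class, function and message names are invented for the example, and the twelve-hour window simply mirrors the review period mentioned above.

# Generic illustration of a rolling "reverse scan" of delivered messages.
# NOT DATEV's implementation; names and the signature check are invented.
import time
from collections import deque
from typing import Callable, Deque, List, Tuple

WINDOW_SECONDS = 12 * 60 * 60  # twelve-hour review period, as described above


class ReverseScanBuffer:
    def __init__(self) -> None:
        self._buffer: Deque[Tuple[float, str]] = deque()

    def deliver(self, message: str) -> None:
        # Store a copy of every delivered message with its delivery time.
        self._buffer.append((time.time(), message))

    def rescan(self, is_malicious: Callable[[str], bool]) -> List[str]:
        # Drop messages older than the window, then re-check the rest
        # against the latest detection logic (e.g. updated signatures).
        cutoff = time.time() - WINDOW_SECONDS
        while self._buffer and self._buffer[0][0] < cutoff:
            self._buffer.popleft()
        return [msg for _, msg in self._buffer if is_malicious(msg)]


if __name__ == "__main__":
    buf = ReverseScanBuffer()
    buf.deliver("quarterly report attached")
    buf.deliver("invoice.exe attached")          # later found to be malicious
    flagged = buf.rescan(lambda m: "invoice.exe" in m)
    print(flagged)                               # messages to quarantine or report

Re-running the scan as signatures are updated is what allows an infection delivered before detection became possible to still be caught within the window; flagged messages would then be quarantined or their recipients notified.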
14894
https://en.wikipedia.org/wiki/IIT%20Kanpur
IIT Kanpur
Indian Institute of Technology Kanpur (IIT Kanpur) is a public technical university located in Kanpur, Uttar Pradesh, India. It was declared an Institute of National Importance by the Government of India under the Institutes of Technology Act. Established in 1959 as one of the first Indian Institutes of Technology, the institute was created with the assistance of a consortium of nine US research universities as part of the Kanpur Indo-American Programme (KIAP).
History
IIT Kanpur was established by an Act of Parliament in 1959. The institute started in December 1959 in a room in the canteen building of the Harcourt Butler Technological Institute at Agricultural Gardens in Kanpur. In 1963, the institute moved to its present location, on the Grand Trunk Road near the locality of Kalyanpur in Kanpur district. The campus was designed by Achyut Kanvinde in a modernist style.
During the first ten years of its existence, a consortium of nine US universities (namely MIT, the University of California, Berkeley, the California Institute of Technology, Princeton University, the Carnegie Institute of Technology, the University of Michigan, Ohio State University, the Case Institute of Technology and Purdue University) helped set up IIT Kanpur's research laboratories and academic programmes under the Kanpur Indo-American Programme (KIAP). The first director of the institute was P. K. Kelkar (after whom the Central Library was renamed in 2002).
Under the guidance of economist John Kenneth Galbraith, IIT Kanpur was the first institute in India to offer computer science education. The earliest computer courses were started at IIT Kanpur in August 1963 on an IBM 1620 system. The initiative for computer education came from the Electrical Engineering department, then under the chairmanship of Prof. H. K. Kesavan, who was concurrently the chairman of Electrical Engineering and head of the Computer Centre. Prof. Harry Huskey of the University of California, Berkeley, who preceded Kesavan, helped with the computer activity at IIT Kanpur. In 1971, the institute began an independent academic program in Computer Science and Engineering, leading to MTech and PhD degrees. In 1972 the KIAP program ended, in part because of tensions due to the U.S. support of Pakistan. Government funding was also reduced as a reaction to the sentiment that the IITs were contributing to the brain drain.
The institute's annual technical festival, Techkriti, was first started in 1995.
Campus
IIT Kanpur is located on the Grand Trunk Road, west of Kanpur City, and measures close to . This land was donated by the Government of Uttar Pradesh in 1960, and by March 1963 the institute had moved to its current location. The institute has around 6,478 students: 3,938 undergraduate students, 2,540 postgraduate students, and about 500 research associates.
Noida Extension centre
IIT Kanpur is to open an extension centre in Noida, with the plan of building a small convention centre there to support outreach activities. Its foundation was laid on 4 December 2012 on 5 acres of land allocated by the Uttar Pradesh state government in Sector 62 of Noida, which is less than an hour's journey from New Delhi and the Indira Gandhi International Airport. The cost of construction is estimated to be about Rs 25 crore. The new campus will have an auditorium, seminar halls for organising national and international conferences and an International Relations Office, along with a 7-storey guest house.
Several short-term management courses and refresher courses meant for distance learning will be available at the extension centre.
Helicopter service
Being a major industrial town, Kanpur has good rail and road connectivity but lags behind in terms of air connectivity. For this reason IIT Kanpur suffered significantly in comparison to IIT Delhi and IIT Bombay as far as visiting companies and other dignitaries are concerned. On 1 June 2013, a helicopter ferry service run by Pawan Hans Helicopters Limited was started at IIT Kanpur. In its initial run the service connects IIT Kanpur to Lucknow, but it is planned to later extend it to New Delhi. Currently there are two flights daily to and from Lucknow Airport, with a duration of 25 minutes. Lucknow Airport operates both international and domestic flights to major cities. IIT Kanpur is the first academic institution in the country to provide such a service. The estimated charges are Rs. 6000 (US$100) per person. Anyone who would like to avail of the facility has to contact the Student Placement Office (SPO) at IIT Kanpur, since the helicopter service is subject to availability of the chopper. The campus also has airstrips, which allow flight workshops and joyrides for students.
New York Office
The institute has set up an office in New York, with alumnus Sanjiv Khosla designated as the institute's overseas brand ambassador. It is located at 62 William Street, Manhattan. The office aims to recruit qualified and capable faculty from abroad, facilitate internship opportunities in North American universities and be a conduit for research tie-ups with various US universities. The New York office also seeks to raise funds through the alumni based there. A system that invites students and faculty of foreign institutes to IIT Kanpur is also being formulated.
Organisation and administration
Governance
All IITs follow the same organisational structure, which has the President of India as Visitor at the top of the hierarchy. Directly under the president is the IIT Council. Under the IIT Council is the board of governors of each IIT. Under the board of governors is the director, who is the chief academic and executive officer of the IIT. Under the director comes the deputy director, and under the director and the deputy director come the deans, heads of departments and the registrar.
Departments
The academic departments at IIT Kanpur are:
Academics
Undergraduate
IIT Kanpur offers four-year BTech programs in Aerospace Engineering, Biological Sciences and Bio-engineering, Chemical Engineering, Civil Engineering, Computer Science and Engineering, Electrical Engineering, Materials Science and Engineering, and Mechanical Engineering. Admission to these programs is through the Joint Entrance Examination. IITK now offers admission only to bachelor's degree programs (having discontinued the integrated course programs), but the degree can be extended by one year to make it integrated, depending on the student's choice and his or her performance at the undergraduate level. IIT Kanpur also offers four-year BS programs in pure and applied sciences (Mathematics, Physics and Chemistry in particular), Earth Science and Economics.
New academic system
From 2011, IIT Kanpur started offering a four-year BS program in the sciences while keeping its BTech program intact. Entry to the five-year MTech/MBA programs and the dual degree programme is based on students' CPI instead of JEE rank.
In order to reduce the number of student exams, IIT Kanpur has also abolished the earlier system of conducting two mid-term examinations. Instead, from the academic session starting July 2011, as per the Academic Review Committee's recommendations, only two examinations are held, one in the middle of the semester and the other towards its end, plus two quizzes in most courses (depending on the instructor-in-charge), one before the mid-semester examination and the other between the mid-semester and end-semester examinations.
Postgraduate
Postgraduate courses in engineering offer Master of Technology (MTech), MS (R) and PhD degrees. The institute also offers two-tier MSc courses in areas of basic sciences, to which students are admitted through the Joint Admission Test for MSc (JAM). The institute also offers M.Des. (2 years), M.B.A. (2 years) and MSc (2 years) degrees. Admission to MTech programmes is made once a year through the Graduate Aptitude Test in Engineering (GATE). Admission to the M.Des. programme is made once a year through both GATE and the Common Entrance Exam for Design (CEED). Until 2011, admissions to the M.B.A. program were made through the Joint Management Entrance Test (JMET), held yearly and followed by a group discussion/personal interview process. In 2011, JMET was replaced by the Common Admission Test (CAT).
Admissions
Undergraduate admissions until 2012 were done through the national-level Indian Institute of Technology Joint Entrance Examination (IIT-JEE). Following the Ministry of Human Resource Development's decision to replace IIT-JEE with a common engineering entrance examination, IIT Kanpur's admissions are now based on JEE-Advanced, along with the other IITs. Postgraduate admissions are made through the Graduate Aptitude Test in Engineering and the Common Admission Test.
Rankings
Internationally, IIT Kanpur was ranked 277 in the QS World University Rankings for 2022. It was ranked 65 in the QS Asia Rankings 2020 and 25 among BRICS nations in 2019. The Times Higher Education World University Rankings ranked it 601–800 globally in 2020, 125 in Asia and 77 in the Emerging Economies University Rankings 2020. In India, IIT Kanpur was ranked third among engineering colleges by India Today in 2021. It was ranked fourth among engineering colleges in India by the National Institutional Ranking Framework (NIRF) in 2020, and sixth overall. The Department of Industrial and Management Engineering was ranked 16th among management schools in India by NIRF in 2020.
Laboratories and other facilities
The campus is spread over an area of . Facilities include the National Wind Tunnel Facility. Other large research centres include the Advanced Centre for Materials Science, a biotechnology centre, the Advanced Centre for Electronic Systems, the Samtel Centre for Display Technology, the Centre for Mechatronics, the Centre for Laser Technology, the Prabhu Goel Research Centre for Computer and Internet Security, and the Facility for Ecological and Analytical Testing. The departments have their own libraries. The institute has its own airfield, for flight testing and gliding.
PK Kelkar Library (formerly the Central Library) is an academic library of the institute with a collection of more than 300,000 volumes and subscriptions to more than 1,000 periodicals. The library was renamed to its present name in 2003 after Dr. P. K. Kelkar, the first director of the institute. It is housed in a three-storey building with a total floor area of 6,973 square metres.
Abstracting and indexing periodicals, microform and CD-ROM databases, technical reports, standards and theses are held in the library. Each year, about 4,500 books and journal volumes are added to the library.
The New Core Labs (NCL) is a three-storey building with state-of-the-art physics and chemistry laboratories for courses in the first year. The New Core Labs also has Linux and Windows computer labs for the use of first-year courses and a Mathematics department laboratory housing machines with high computing power.
IIT Kanpur has set up the Startup Innovation and Incubation Centre (SIIC) (previously known as the SIDBI Innovation and Incubation Centre) in collaboration with the Small Industries Development Bank of India (SIDBI), aiming to aid innovation, research, and entrepreneurial activities in technology-based areas. SIIC helps business start-ups develop their ideas into commercially viable products.
A team of students, working under the guidance of faculty members of the institute and scientists of the Indian Space Research Organisation (ISRO), designed and built India's first nanosatellite, Jugnu, which was successfully launched into orbit on 12 October 2011 by ISRO's PSLV-C18.
Computer Centre
The Computer Centre is one of the most advanced computing service centres among academic institutions in India. It hosts the IIT Kanpur website and provides personal web space for students and faculty. It also provides a spam-filtered email server and high-speed fibre-optic Internet to all the hostels and the academic area. Users can choose among various interfaces to access the mail service. It has Linux and Windows laboratories equipped with dozens of high-end software packages such as MATLAB, AutoCAD, Ansys and Abaqus for the use of students. Apart from the departmental computer labs, the Computer Centre hosts more than 300 Linux terminals and more than 100 Windows terminals and is continuously available to students for academic work and recreation. The Computer Centre has recently adopted an open-source software policy for its infrastructure and computing. Various high-end compute and GPU servers in the data centre are remotely available for user computation. The Computer Centre has multiple supercomputing clusters for research and teaching activity. In June 2014 IIT Kanpur launched its second supercomputer, which at the time was India's fifth most powerful. The new supercomputer, 'Cluster Platform SL230s Gen8', manufactured by Hewlett-Packard, has 15,360 cores and a theoretical peak (Rpeak) of 307.2 TFlop/s, and was the world's 192nd most powerful supercomputer as of June 2015.
Students' research related activity
Research is overseen by the Office of the Dean of Research and Development. Under the aegis of the office, students publish the quarterly NERD magazine (Notes on Engineering Research and Development), which carries scientific and technical content created by students. Articles may be original work done by students in the form of hobby projects, term projects, internships, or theses. Articles of general interest which are informative but do not reflect original work are also accepted. The institute has been part of the European Research and Education Collaboration with Asia (EURECA) programme since 2008. Along with the magazine, a student research organisation, PoWER (Promotion of Work Experience and Research), has been started.
Under it, several independent student groups work on projects such as the Lunar Rover for ISRO, alternative energy solutions under the Group for Environment and Energy Engineering, ICT solutions through the Young Engineers group, solutions for diabetes, and green community solutions through ideas like a zero-water, zero-waste and quality-air approach. Through BRaIN (Biological Research and Innovation Network), students interested in solving biological problems get involved in research projects like genetically modifying fruit flies to study molecular systems and developing bio-sensors to detect alcohol levels. A budget of Rs 1.5 to 2 crore has been envisaged to support student projects that demonstrate technology. Defence The institute assists the Indian Ordnance Factories not only in upgrading existing products but also in developing new weapon platforms. Jugnu Students of IIT Kanpur built a nanosatellite called Jugnu, which was handed over by President Pratibha Patil to ISRO for launch. Jugnu is a remote sensing nanosatellite developed to be operated by the Indian Institute of Technology Kanpur and to provide data for agriculture and disaster monitoring. It is a 3-kilogram (6.6 lb) spacecraft, which measures 34 centimetres (13 in) in length by 10 centimetres (3.9 in) in height and width. Its development programme cost around 25 million rupees. It has a design life of one year. Jugnu's primary instrument is the Micro Imaging System, a near-infrared camera used to observe vegetation. It also carries a GPS receiver to aid tracking, and is intended to demonstrate a microelectromechanical inertial measurement unit. IITK Motorsports IITK Motorsports, founded in January 2011, is the largest and most comprehensive student initiative at the institute. It is a group of students from varied disciplines who aim to design and fabricate a Formula-style race car for international Formula SAE (Society of Automotive Engineers) events. Most of the components of the car, except the engine, tyres and wheel rims, are designed and manufactured by the team members themselves. The car is designed to provide maximum performance under the constraints of the event, while ensuring the driveability, reliability, driver safety and aesthetics of the car are not compromised. Maraal UAVs Researchers at IIT Kanpur have developed a series of solar-powered UAVs named Maraal-1 and Maraal-2. The development of Maraal is notable as it is the first solar-powered UAV developed in India. Maraal-2 is fully indigenous. Student life National events Antaragni: Antaragni is a non-profit organisation run by the students of IIT Kanpur. When it began, it was funded entirely by the Student Gymkhana; today the budget is almost Rs 1 crore, raised through sponsorship. It began as an inter-collegiate cultural event in 1964 and now draws over 1,00,000 visitors from some 300 colleges in India. The annual cultural festival is held over four days in October and includes music, drama, literary games, a fashion show and quizzing. A YouTube channel dedicated to the festival has over 1,000 subscribers. Techkriti: It was started in 1995 with the aim of encouraging interest and innovation in technology among students and of providing a platform for industry and academia to interact. Megabucks (a business and entrepreneurship festival) used to be held independently but was merged with Techkriti in 2010. 
Notable speakers at Techkriti have included APJ Abdul Kalam, Vladimir Voevodsky, Douglas Osheroff, Oliver Smithies, Rakesh Sharma, David Griffiths and Richard Stallman. Udghosh: Udghosh is IIT Kanpur's annual sports festival, usually held in September. It started in 2004 as an inter-college sports meet organised by the institute. Udghosh involves students from all over India competing in the institute's sports facilities. Besides the various sports events, the festival includes motivational talks, a mini marathon, gymnastics shows and sports quizzes. Vivekananda Youth Leadership Convention: Vivekananda Samiti, under the Students' Gymkhana and on behalf of IIT Kanpur, undertook the celebration of the 150th birth anniversary of Swami Vivekananda from 2011 to 2015. In previous years the convention has included Kiran Bedi, Bana Singh, Yogendra Singh Yadav, Raju Narayana Swamy, Arunima Sinha, Rajendra Singh and other personalities from different fields. E-Summit: It started in 2013, with the first E-Summit held on 16–18 August 2013. This three-day festival by the Entrepreneurship Cell, IIT Kanpur, on the theme Emerge on the Radar, included talks by eminent personalities, workshops and competitions. Students' Gymkhana The Students' Gymkhana is the students' government organization of IIT Kanpur, established in 1962. The Students' Gymkhana functions mainly through the Students' Senate, an elected student representative body composed of senators elected from each batch and six elected executives: President, Students' Gymkhana; General Secretary, Media and Culture; General Secretary, Games and Sports; General Secretary, Science and Technology; UG Secretary, Academics and Career; and PG Secretary, Academics and Career. The number of senators in the Students' Senate is around 50–55. A senator is elected for every 150 students of IIT Kanpur. The meetings of the Students' Senate are chaired by the chairperson, Students' Senate, who is elected by the Senate. The Senate lays down the guidelines for the functions of the executives, their associated councils, the Gymkhana festivals and other matters pertaining to the student body at large. The Students' Senate has a say in the policy and decision-making bodies of the institute. The president, Students' Gymkhana, and the chairperson, Students' Senate, are among the special invitees to the Institute Academic Senate. The president is usually invited to the meetings of the board of governors when matters affecting students are being discussed. Nominees of the Students' Senate are also members of the various standing committees of the Institute Senate, including the disciplinary committee, the undergraduate and postgraduate committees, etc. All academic departments have Departmental Undergraduate and Postgraduate Committees consisting of members of the faculty and student nominees. Notable alumni See also Indian Institutes of Technology National Institutes of Technology Institutes of National Importance References External links Kanpur Engineering colleges in Uttar Pradesh Universities and colleges in Kanpur Educational institutions established in 1959 1959 establishments in Uttar Pradesh
43439629
https://en.wikipedia.org/wiki/Software%20Process%20simulation
Software Process simulation
Software process simulation modelling: Like any simulation, software process simulation (SPS) is the numerical evaluation of a mathematical model that imitates the behavior of the software development process being modeled. SPS has the ability to model the dynamic nature of software development and handle the uncertainty and randomness inherent in it. Uses of software process simulation The following main purposes have been proposed for SPS: Support in operational project management (estimation, planning and control) Support for strategic management Tool for training and education in software project management and the software development lifecycle Process improvement and technology adoption How to do software process simulation Software process simulation starts with identifying a question that we want to answer. The question could, for example, relate to the assessment of an alternative, such as incorporating a new practice in the software development process. Introducing such changes in the actual development process is expensive, and if the consequences of the change are not positive, the implications can be dire for the organization. Thus, through simulation we attempt to get an initial assessment of such changes on the model instead of on an active development project. Based on this problem description, an appropriate scope of the process is chosen and a simulation approach is selected to model the development process. The model is then calibrated using empirical data and used to conduct simulation-based investigations. A detailed description of each step in general can be found in Balci's work; for software process simulation in particular, a comprehensive overview can be found in Ali et al. In a recent initiative by the ACM Special Interest Group on Software Engineering (SIGSOFT), a standard for assessing simulation-based scientific studies has been proposed. Examples of using software process simulation for practical issues in industrial settings Process assessment: Enabling dynamic analysis in value stream mapping in industrial settings Software Testing: Deciding when to automate software testing Key venues Software process simulation has been an active research area for many decades; key venues include the International Conference on Software and Systems Process and its predecessor, the Workshop on Software Process Simulation Modeling (ProSim), held from 1998 to 2004. References Simulation
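To make the simulation workflow described above concrete, the sketch below runs a toy Monte Carlo comparison of two process alternatives in Python. It is only an illustration of the general approach, not a model taken from the literature cited above: the phase names, effort distributions, rework probability and the "baseline"/"alternative" split are all invented assumptions.

```python
"""Minimal Monte Carlo sketch of a software process simulation.

Illustrative only: the phase names, effort distributions and rework
parameters are invented assumptions, not data from any study.  The point
is the workflow described above: pose a question (does a process change
pay off?), build a model, calibrate it (here: guessed parameters), then
evaluate it numerically many times.
"""
import random
import statistics

# Assumed per-phase effort in person-days: (mean, std dev) of a normal draw.
BASELINE = {"design": (20, 4), "coding": (40, 8), "manual_testing": (30, 6)}
ALTERNATIVE = {"design": (20, 4), "coding": (40, 8),
               "test_automation_setup": (15, 3), "automated_testing": (10, 2)}

REWORK_PROB = 0.3      # assumed chance that testing triggers a rework cycle
REWORK_FACTOR = 0.25   # assumed fraction of coding effort spent on rework


def simulate_once(phases):
    """Draw one possible project effort for a given process model."""
    total = sum(max(0.0, random.gauss(mean, sd)) for mean, sd in phases.values())
    if random.random() < REWORK_PROB:
        total += REWORK_FACTOR * phases["coding"][0]
    return total


def simulate(phases, runs=10_000):
    samples = [simulate_once(phases) for _ in range(runs)]
    return statistics.mean(samples), statistics.stdev(samples)


if __name__ == "__main__":
    for name, model in (("baseline", BASELINE), ("alternative", ALTERNATIVE)):
        mean, sd = simulate(model)
        print(f"{name:12s} mean effort {mean:6.1f} person-days (sd {sd:4.1f})")
```

In a real study, as noted above, the distributions would be calibrated against empirical data from the organisation before any simulation-based conclusions were drawn.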
36374207
https://en.wikipedia.org/wiki/Parsytec
Parsytec
ISRA VISION PARSYTEC AG is a company of ISRA VISION AG and was founded in 1985 as Parsytec (PARallel SYstem TEChnology) in Aachen, Germany. Parsytec became known in the late 1980s and early 1990s as a manufacturer of transputer-based parallel systems. Products ranged from a single-transputer plug-in board for the IBM PC up to large massively parallel systems with thousands of transputers (or processors), such as the Parsytec GC. Some sources describe the latter as ultracomputer-sized, scalable multicomputers (smC). As part of ISRA VISION AG, the company today focuses on solutions in the machine vision and industrial image processing sector. ISRA Parsytec products are used for quality and surface inspection, especially in the metal and paper industries. History In 1985, Parsytec was founded by Falk-Dietrich Kübler, Gerhard H. Peise, and Bernd Wolff in Aachen, Germany, with an 800,000 DM grant from the Federal Ministry for Research and Technology (BMFT). In contrast to SUPRENUM, Parsytec aimed its systems (pattern recognition) directly at industrial applications such as surface inspection. It therefore not only had a substantial market share in European academia but also won many industrial customers, including many outside Germany. In 1988, export accounted for roughly a third of Parsytec's turnover. Turnover figures were nil in 1985, 1.5M DM in 1986, 5.2M DM in 1988, 9M DM in 1989, 15M DM in 1990, and 17M USD in 1991. In order to keep Parsytec focused on research and development, ParaCom was founded to take care of the sales and marketing side of the business. Parsytec/ParaCom's headquarters were in Aachen (Germany), with subsidiary sales offices in Chemnitz (Germany), Southampton (United Kingdom), Chicago (USA), St Petersburg (Russia) and Moscow (Russia). In Japan, the machines were sold by Matsushita. Between 1988 and 1994, Parsytec built an impressive range of transputer-based computers, culminating in the "Parsytec GC" (GigaCluster), which was available in versions using 64 up to 16,384 transputers. Parsytec had its IPO in mid-1999 at the German Stock Exchange in Frankfurt. On 30 April 2006, founder Falk-D. Kübler left the company. In July 2007, 52.6% of Parsytec AG was acquired by ISRA VISION AG. The delisting of Parsytec shares from the stock market started in December of the same year, and since 18 April 2008 the Parsytec share has no longer been listed on the stock exchange. Whilst the workforce at Parsytec was around 130 staff in the early 1990s, the ISRA VISION Group had more than 500 employees in 2012/2013. Today, the core business of ISRA Parsytec within the ISRA VISION Group is the development and distribution of surface inspection systems for strip products in the metal and paper industries. Products/Computers Parsytec's product range included: Megaframe (T414/T800) --- one per board, up to ten boards in a rack or as plug-in boards MultiCluster (T800) --- up to 64 processors in a single rack SuperCluster (T800) --- 16 to 1024 processors in a single frame GigaCluster (planned: T9000; realized: T800 or MPC 601) --- 64 to 16384 processors in "cubes" x'plorer (T800 or MPC 601) Cognitive Computer (MPC 604 and Intel Pentium Pro) Powermouse (MPC 604) In total, some 700 stand-alone systems (SC and GC) were shipped. 
In the beginning, Parsytec had participated in the GPMIMD (General Purpose MIMD) project under the umbrella of the ESPRIT project, both being funded by the European Commission's Directorate for Science. However, after substantial disagreements with the other participants (Meiko, Parsys, Inmos and Telmat) over the choice of a common physical architecture, Parsytec left the project and announced a T9000-based machine of its own, the GC. Due to Inmos' problems with the T9000, however, the company was forced to switch to an ensemble of Motorola MPC 601 CPUs and Inmos T805 transputers. This led to Parsytec's "hybrid" systems (e.g. the GC/PP), which relegated the transputers to communication processors whilst the compute work was offloaded to the PowerPCs. Parsytec's cluster systems were operated by an external workstation, typically a Sun workstation (e.g. a Sun-4). There is substantial confusion regarding the names of the Parsytec products. On the one hand this has to do with the architecture; on the other, it has to do with the aforementioned non-availability of the Inmos T9000, which forced Parsytec to use the T805 and the PowerPC instead. Systems that were equipped with PowerPC processors had the prefix "Power". As regards the architecture of GC systems, an entire GigaCluster is made up of self-contained GigaCubes. The basic architectural element of a Parsytec system was a cluster, which consisted inter alia of four transputers/processors (i.e. a cluster is a node in the classical sense). A GigaCube (sometimes referred to as a supernode or meganode) consisted of four clusters (nodes), each with 16 Inmos T805 transputers (30 MHz), RAM (up to 4 MB per T805), a further redundant T805 (an additional, thus 17th, processor), the local link connections and four Inmos C004 routing chips. Hardware fault tolerance was provided by linking each of the T805s to a different C004. The unusual spelling of x'plorer led to variants such as xPlorer, and the GigaCluster is sometimes referred to as the GigaCube or Grand Challenge. Megaframe Megaframe was the product name of a family of transputer-based parallel processing modules, some of which could be used to upgrade an IBM PC. As a standalone system, a Megaframe could hold up to ten processor modules. Different versions of the modules were available, for example one with a 32-bit T414 transputer, Motorola 68881 floating-point hardware, 1 MB of RAM (80 nanosecond access time) and a throughput of 10 MIPS, or one with four 16-bit T22x transputers with 64 kB of RAM. Cards for special features were also on offer, such as a graphics processor with a resolution of 1280 x 1024 pixels or an I/O "cluster" with terminal and SCSI interfaces. Multicluster The MultiCluster-1 series were statically configurable systems and could be tailored to specific user requirements such as the number of processors, the amount of memory, and the I/O configuration, as well as the system topology. The required processor topology could be configured using UniLink connections fed through the special backplane. In addition, four external sockets were provided. MultiCluster-2 used network configuration units (NCUs) that provided flexible, dynamically configurable interconnection networks. The multiuser environment could support up to eight users using Parsytec's multiple virtual architecture software. The NCU design was based on the Inmos crossbar switch, the C004, which provides full crossbar connectivity for up to 16 transputers. 
Each NCU, made of C004s, connected up to 96 UniLinks linking internal as well as external transputers and other I/O subsystems. MultiCluster-2 provided the ability to configure a variety of fixed interconnection topologies such as tree or mesh structures. SuperCluster The SuperCluster had a hierarchical, cluster-based design. Its basic unit was a fully connected cluster of 16 T800 transputers; larger systems had additional levels of NCUs to form the necessary connections. The Network Configuration Manager (NCM) software controlled the NCUs and dynamically established the required connections. Each transputer could be equipped with 1 to 32 MB of dynamic RAM with single-error correction and double-error detection. GigaCluster The GigaCluster (GC) was a parallel computer produced in the early 1990s. A GigaCluster was made up of GigaCubes. Designed for the Inmos T9000 transputer, it could never be launched as such, since the T9000 itself never made it to market in good time. This led to the development of the GC/PP (PowerPlus), in which two Motorola MPC 601s (80 MHz) were used as the dedicated CPUs, supported by four T805 transputers (30 MHz). Whilst the GC/PP was a hybrid system, the GCel ("entry level") was based on the T805 only. The GCel was supposed to be upgradeable to the T9000 transputers (had they come early enough), thus becoming a full GC. As the T9000 was Inmos' evolutionary successor to the T800, upgrading was planned to be simple and straightforward because, firstly, both transputers shared the same instruction set and, secondly, they had quite a similar ratio of compute power to communication throughput. A theoretical speed-up factor of 10 was expected, but in the end it was never reached. The network structure of the GC was a two-dimensional lattice with an inter-communication speed between the nodes (i.e. clusters in Parsytec's lingo) of 20 Mbit/s. For the time, the concept of the GC was exceptionally modular and thus scalable. A so-called GigaCube was a module that was already a one-gigaflop system; furthermore, it was the building block for larger systems. A module (i.e. a cube in Parsytec's lingo) contained four clusters, each of which was equipped with 16 transputers (plus a further transputer for redundancy, making 17 transputers per cluster), four wormhole routing chips (the C104 for the planned T9000 and the C004 with the realised T805), a dedicated power supply and communications ports. By combining modules (cubes), one could theoretically connect up to 16,384 processors into a very powerful system. Among the typical installations, the two largest GC systems actually shipped had 1024 processors (16 modules, with 64 transputers per module) and were operated at the data centres of the Universities of Cologne and Paderborn. In October 2004, the latter was given to the Heinz Nixdorf Museums Forum, where it is now inoperable. The power consumption of a system with 1024 processors was approximately 27 kW, and its weight was almost a ton. In 1992, the system was priced at about 1.5M DM. While the smaller versions up to the GC-3 were air-cooled, water cooling was mandatory for the larger systems. In 1992, a GC with 1024 processors achieved a placement in the TOP500 list of the world's fastest supercomputer installations. Within Germany, it was the 22nd fastest computer. 
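As a rough sizing sketch of the GigaCube composition just described (four clusters per cube, 16 worker T805s plus one redundant T805 per cluster), the short Python fragment below works out how many cubes a given processor count implies; the constant and function names are ours, not Parsytec's.

```python
"""Back-of-the-envelope sizing of GigaCluster configurations, based solely
on the figures quoted above: 4 clusters per GigaCube, 16 worker T805s plus
one redundant T805 per cluster.  Helper names are illustrative only."""

WORKERS_PER_CLUSTER = 16      # compute transputers per cluster
REDUNDANT_PER_CLUSTER = 1     # the 17th T805 per cluster
CLUSTERS_PER_CUBE = 4


def cubes_needed(processors):
    """How many GigaCubes a configuration of `processors` workers implies."""
    workers_per_cube = CLUSTERS_PER_CUBE * WORKERS_PER_CLUSTER   # 64
    cubes, remainder = divmod(processors, workers_per_cube)
    return cubes + (1 if remainder else 0)


for n in (64, 1024, 16384):
    cubes = cubes_needed(n)
    spares = cubes * CLUSTERS_PER_CUBE * REDUNDANT_PER_CLUSTER
    print(f"{n:>5} worker transputers -> {cubes:>3} cube(s), "
          f"{spares:>4} redundant T805s on top")
```

The 1024-processor case reproduces the shipped Cologne and Paderborn configurations of 16 modules mentioned above, and the theoretical 16,384-processor maximum corresponds to 256 cubes.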
In 1995, there were nine Parsytec computers in the TOP500 list, of which two GC/PP 192 installations ranked 117 and 188. In 1996, these two still ranked 230 and 231. x'plorer The x'plorer came in two versions: the initial version featured 16 transputers, each with access to 4 MB of RAM, and was called simply x'plorer. Later, when Parsytec generally switched to the PowerPC architecture, it was called POWERx'plorer and featured 8 MPC 601 CPUs. Both models came in the same distinctive desktop case (designed by Via 4 Design). In either model, the x'plorer was more or less a single "slice" — Parsytec called it a cluster — of a GigaCube (PowerPC or transputer), which used four of those clusters in its smallest version (GC-1). Thus, some call it a "GC-0.25". The POWERx'plorer was based on 8 processing units arranged in a 2D mesh. Each processing unit had one 80 MHz MPC 601 processor, 8 MB of local memory and a transputer for establishing and maintaining communication links. Cognitive Computer The Parsytec CC (Cognitive Computer) system was an autonomous unit at the card rack level. The CC card rack subsystem provided the system with its infrastructure, including power supply and cooling. The system could be configured as a standard 19-inch rack-mountable unit which accepted the various 6U plug-in modules. The CC system was a distributed-memory, message-passing parallel computer and is globally classified into the MIMD category of parallel computers. There were two different versions available: CCe: based on the Motorola MPC 604 processor running at 133 MHz with 512 KB of L2 cache. The modules were connected together at 1 Gbit/s with high-speed (HS) link technology according to the IEEE 1355 standard, allowing data transfer at up to 75 MB/s. The communication controller was integrated in the processor nodes through the PCI bus. The system board used the MPC 105 chip to provide memory control, DRAM refresh and memory decoding for banks of DRAM and/or Flash. The CPU bus speed was limited to 66 MHz, while the PCI bus speed was at most 33 MHz. CCi: based on the Intel Pentium Pro, its core elements were dual Pentium Pro-based motherboards (at 266 MHz) which were interconnected using several high-speed networks. Each dual motherboard had 128 MByte of memory. Each node had a peak performance of 200 MFLOPS. The product spectrum comprised single-processor and SMP boards up to a 144-node system, a large variety of PCI cards and different communication solutions (Gigabit HS-Link, Myrinet, SCI, ATM or Fast Ethernet). The operating systems were Windows NT 4.0 and ParsyFRame (a UNIX environment was optional). In all CC systems, the nodes were directly connected to the same router, which implemented an active hardware 8 by 8 crossbar switch for up to 8 connections using the 40 MBytes/s high-speed link. As for the CCe, the software was based on IBM's AIX 4.1 UNIX operating system together with Parsytec's parallel programming environment Embedded PARIX (EPX). Thus, it combined a standard UNIX environment (compilers, tools, libraries) with an advanced software development environment. The system was integrated into the local area network using standard Ethernet. A CC node had a peak performance of 266 MFlops; the peak performance of the 8-node CC system installed at Geneva University Hospital was therefore 2.1 GFlops. Powermouse Powermouse was another scalable system that consisted of modules and individual components. 
It was a straightforward extension of the x'plorer system. Each module (dimensions: 9 cm x 21 cm x 45 cm) contained four MPC 604 processors (200/300 MHz) and 64 MB of RAM, attaining a peak performance of 2.4 Gflop/s. A separate communication processor (T425), equipped with 4 MB of RAM, controlled the data flow in four directions to other modules in the system. The bandwidth of a single node was 9 MB/s. For about 35,000 DM, a basic system consisting of 16 CPUs (i.e. four modules) could provide a total computing power of 9.6 Gflop/s. As with all Parsytec products, Powermouse required a Sun SPARCstation as the front-end. All software (PARIX with C++ and Fortran 77 compilers and debuggers, alternatively providing MPI or PVM as user interfaces) was included. Operating system The operating system used was PARIX (PARallel UnIX extensions) (PARIXT8 for the T80x transputers and PARIXT9 for the T9000 transputers, respectively). Based on UNIX, PARIX supported remote procedure calls and was compliant with the POSIX standard. PARIX provided UNIX functionality at the front-end (e.g. a Sun SPARCstation, which had to be purchased separately), with library extensions for the needs of the parallel system at the back-end, which was the Parsytec product itself (connected to the front-end from which it was operated). The PARIX software package comprised components for the program development environment (compilers, tools, etc.) and the runtime environment (libraries). PARIX offered different types of synchronous and asynchronous communication. In addition, Parsytec provided a parallel programming environment called Embedded PARIX (EPX). To develop parallel applications using EPX, data streams and function tasks were allocated to a network of nodes. The data handling between processors required just a few system calls. Standard routines for synchronous communication such as send and receive were available, as well as asynchronous system calls. The full set of EPX calls established the EPX application programming interface (API). The destination for any message transfer was defined through a virtual channel that ended at any user-defined process. Virtual channels were user-defined and managed by EPX. The actual message delivery system software utilised the router. Moreover, one could also run COSY (Concurrent Operating SYstem) and Helios on the machines. Helios supported Parsytec's special reset mechanism out of the box. See also INMOS SUPRENUM Meiko Scientific Thinking Machines Corporation References External links Homepage of ISRA VISION PARSYTEC AG Ram Meenakshisundaram's Transputer Home Page at classiccmp.org 16384 Prozessoren bringen 400 Gflops Transputer-Superrechner von Parsytec als neuer Weltmeister Article at computerwoche.de (German) Zur Strategie von Parsytec Kuebler: "In zehn Jahren rechnen die meisten Computer parallel" Oct 1, 1993, at computerwoche.de (German) The FTMPS-Project: Design and Implementation of Fault-tolerance Techniques for Massively Parallel Systems J. Vounckx et al. Homepage of Via 4 Design Supercomputers Massively parallel computers Parallel computing Manufacturing companies based in Aachen
2155259
https://en.wikipedia.org/wiki/Skip%20%28audio%20playback%29
Skip (audio playback)
A skip occurs when a phonograph (gramophone), cassette tape or compact disc player malfunctions or is disturbed so as to play incorrectly, causing a break in sound or a jump to another part of the recording. Vinyl gramophone records Vinyl records are easily scratched and vinyl readily acquires a static charge, attracting dust that is difficult to remove completely. Dust and scratches cause audio clicks and pops and, in extreme cases, they can cause the needle (stylus) to skip over a series of grooves, or worse yet, cause the needle to skip backwards, creating an unintentional locked groove that repeats the same 1.8 seconds (at 33⅓ RPM) or 1.3 seconds (at 45 RPM) of track over and over again. Locked grooves are not uncommon and are even heard occasionally in broadcasts. The locked groove gave rise to the expression "broken record", referring to someone who continually repeats the same statement with little if any variation. Compact Discs A "skip" or "jump" occurs when the laser of a compact disc player cannot read a faulty block of data. Skips are usually caused by marks blocking the path of the beam to the disc, e.g. a finger mark, hair, dirt in general, or a scratch. Since the read mechanism does not touch the disc's surface and the data itself is not on the outer layer of the disc, the blockage is not a physical issue as with a record, but rather a reflective one. Basic players Early CD players were very basic in nature. A laser tracks the blocks of data from the centre of the disc outwards, while the disc itself revolves at a variable speed between a starting speed of 495 RPM and a minimum finishing speed of 212 RPM. Generally, one cycle constituted one block of data. If there is a faulty block of data, the player may do one of the following: Repeat the previous block of audio Skip the faulty block Try and retry to read it, causing a stopping and starting of the music A player may utilise one or more of these techniques, depending on how faulty the data is. In the case of severe, irrecoverable damage to the data, the player may try to rescan the disc to relocate its position. In this case, the machine may make a series of audible chirping noises as the laser moves from the faulty block to the data information area and back again. Later players When CD players made their way into battery-powered portable machines and vehicles, a skip could happen even from simple movement such as walking or a vehicle jerking. Therefore, a strategy was needed to try to prevent this. After various techniques were tested and failed, the most successful and popular method to date was to spin the disc faster in order to read a chunk of data into memory while playing. This meant that the player itself could concentrate on reading while the software controlling the buffers and memory distribution could also act as the audio feed. In the case of a minor error, the disc's rotation would again speed up to facilitate several attempts to read the data. A technique was also developed for testing data to prevent severe skipping. These two techniques were largely successful, unless of course the data was damaged beyond repair, in which case the audio might stop or skip as before. CD ROM drives In a computer, the CD-ROM drive is governed by the program controlling it. In most cases, the BIOS has rudimentary access to the drive for boot purposes, while operating systems usually come bundled with their own drivers. 
The drive itself carries out very little on its own, apart from direct instructions such as spin up, read data, etc. When playing a CD in the computer, the media player of choice gives instructions to the CD drive, whether through the operating system's drivers or by accessing the device's low-level interface itself. Usually, similar to modern players, the media player reads audio into memory for later playback, especially given the high speeds used by CD-ROM drives to access raw data on other discs. Because of this, if there is a fault during playback, the player will already have performed a checksum to verify that the data read is correct. If it is wrong, the audio is usually stopped, depending on the player. Cassette tapes Cassette tape players can cause skips when the tape being played is worn or in some other way damaged. Since the tape is wound onto a reel, how the skip affects playback depends on how the tape runs across the reel. Indeed, some early artists such as the Beatles deliberately rearranged the tape reels in order to produce loops used in their recordings. Computer audio Electronic media on a computer can often skip. Such media may include compressed and uncompressed audio, and video containing audio. Generally this cannot happen to music instruction files such as MIDI or MOD files; however, depending on the circumstances, single notes may become "jammed" when the note-off message is not received by the playback device. Computer skips can be caused by a lack of available RAM or processing power, damaged storage media (CD, hard drive, etc.), a crash in the playback software, or a corrupted, incomplete or damaged audio file. Depending on the player and the operating system, a skip usually consists of a 50 ms, 300 ms, 500 ms or one-second loop, depending on the size or length of the chunk of data that is currently loaded into memory. Skipping as a musical component Compact Disc skipping is prevalent in glitch music. See also Gramophone record limitations Wow (recording) Electronic skip protection Optical media preservation References Recorded music
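The read-ahead strategy described under "Later players" can be illustrated with a small, self-contained sketch. It is a generic model, not any particular player's firmware: the buffer depth, read rate and fault probability below are invented assumptions, and the only point it makes is that playback continues from memory while the drive retries a faulty block, so an audible skip occurs only when the buffer runs empty.

```python
"""Toy illustration of the read-ahead ("anti-skip") buffering strategy
described above.  All parameters (block size, buffer depth, fault rate)
are invented assumptions for the sake of the illustration."""
from collections import deque
import random

BUFFER_BLOCKS = 40          # assumed depth of the in-memory buffer
READS_PER_TICK = 2          # drive reads faster than real-time playback
FAULT_PROB = 0.05           # assumed chance a block needs a retry

buffer, disc_pos, played, underruns = deque(), 0, 0, 0

for tick in range(500):
    # Drive side: read ahead of playback while there is room in the buffer.
    for _ in range(READS_PER_TICK):
        if len(buffer) < BUFFER_BLOCKS:
            if random.random() < FAULT_PROB:
                continue            # faulty read: the block is retried later
            buffer.append(disc_pos)
            disc_pos += 1
    # Playback side: consume exactly one block per tick.
    if buffer:
        buffer.popleft()
        played += 1
    else:
        underruns += 1              # audible skip only when the buffer is empty

print(f"played {played} blocks, {underruns} underruns (audible skips)")
```

Because the assumed read rate outpaces playback, occasional faulty reads are absorbed by the buffer and the underrun count stays at or near zero unless the fault rate is made very high.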
899874
https://en.wikipedia.org/wiki/F-Script%20%28programming%20language%29
F-Script (programming language)
F-Script is an object-oriented scripting language for Apple's macOS operating system, developed by Philippe Mougin. F-Script is an interactive language based on Smalltalk, using macOS's native Cocoa API. Overview F-Script is based on a pure object paradigm: every entity manipulated within the language is an object. Its base syntax and concepts are identical to those of the language Smalltalk (the canonical example of an object-oriented language), with specific extensions to support array programming as in the language APL. F-Script provides an interpreted, interactive environment with support for workspaces, which provide a rich set of functions including object persistence, distributed objects, a graphical user interface (GUI) framework, and database access, among other things. Syntax Like Smalltalk, F-Script's syntax is very simple, without requiring specific notation for control structures, which are provided in a unified manner by the message-send operation. Unlike Smalltalk, F-Script provides specific notational extensions to support the Array class, using curly brackets to describe literal arrays, which may contain any F-Script expressions. For example, {1+3, 'name', true} is a valid array literal. The empty array is denoted by {}. Arrays of arrays are supported transparently, since any array is just another object. Message sending Message expressions in F-Script are similar to those in Smalltalk: they specify which object is the receiver of the message, which operation is called by the message, and any argument objects needed by the operation. F-Script supports unary, binary, and keyword messages. F-Script message semantics are extended to support array programming by recognizing that an array operation, such as adding two numerical vectors, must be viewed as generating a number of messages relating the elements of the vectors involved. Thus, if A = {1, 2, 3} and B = {10, 20, 30}, then F-Script allows A + B = {11, 22, 33}. Usage F-Script is chiefly used as a lightweight scripting layer on top of macOS's Cocoa application programming interface (API). It can be embedded in applications using the F-Script framework and Interface Builder palettes. It can also be used interactively from the F-Script interpreter to prototype applications. Finally, it can be used to explore applications' object hierarchies using an injector such as F-Script Anywhere. Forks The original F-Script development by Philippe Mougin stopped at version 2.1 in 2011. Ilya Kulakov (Kentzo) took over the FScript.org website and updated the program to work with Mac OS X 10.7 through 10.10 until version 2.3 of 2014, building off Jonathan Mitchell's modernization work. Kulakov noted that as F-Script ties deeply into the system, the code must be changed to reflect the framework available in each Mac OS X release. The last update to this chain of work was made in 2018 by Wolfgang Baird, who updated F-Script to work with Mac OS X 10.12. References External links F-Script Google Techtalk Array programming languages Class-based programming languages Dynamically typed programming languages Object-oriented programming languages Scripting languages Smalltalk programming language family
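For readers unfamiliar with Smalltalk-style syntax, the element-wise message semantics described above (A + B = {11, 22, 33}) can be modelled in a few lines of Python. This is an analogue of the idea only; it is not F-Script code, and the FArray class and its send method are inventions for illustration, not part of F-Script or its API.

```python
"""A Python analogue of the array message-send semantics described above:
sending a message to an array is treated as sending it element-wise.
This only models the idea (A + B = {11, 22, 33}); it is not F-Script
syntax, and the class below is not part of F-Script itself."""


class FArray:
    def __init__(self, *items):
        self.items = list(items)

    def send(self, selector, other=None):
        """Dispatch `selector` to each element, pairing elements when the
        argument is itself an FArray (as in A + B)."""
        if isinstance(other, FArray):
            pairs = zip(self.items, other.items)
            return FArray(*(getattr(a, selector)(b) for a, b in pairs))
        if other is not None:
            return FArray(*(getattr(a, selector)(other) for a in self.items))
        return FArray(*(getattr(a, selector)() for a in self.items))

    def __repr__(self):
        return "{" + ", ".join(repr(x) for x in self.items) + "}"


A = FArray(1, 2, 3)
B = FArray(10, 20, 30)
print(A.send("__add__", B))   # prints {11, 22, 33}, mirroring A + B in F-Script
```

The sketch makes the key point of the section explicit: a single binary message applied to two arrays is expanded into one message per pair of corresponding elements, with the results collected into a new array.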
49012129
https://en.wikipedia.org/wiki/2016%20USC%20Trojans%20football%20team
2016 USC Trojans football team
The 2016 USC Trojans football team represented the University of Southern California in the 2016 NCAA Division I FBS football season. They played their home games at the Los Angeles Memorial Coliseum as part of the South Division of the Pac-12 Conference. They were led by head coach Clay Helton in his first full season after replacing Steve Sarkisian in the sixth game of the 2015 season. They finished the season 10–3, 7–2 in Pac-12 play to finish in second place in the South Division. They were invited to the Rose Bowl where they defeated Big 10 conference champion Penn State. Personnel Coaching staff Roster Returning starters USC returns 32 starters in 2016, including 17 on offense, 13 on defense, and two on special teams. Key departures include Cody Kessler (QB – 14 games), Tre Madden (TB – 6 games), Jahleel Pinner (FB – 4 games), Max Tuerk (C – 5 games), Antwaun Woods (NT – 13 games), Delvon Simmons (DT – 14 games), Greg Townsend Jr. (DE – 13 games), Claude Pelon (DE – 2 games), Su'a Cravens (LB – 14 games), Scott Felix (LB – 10 games), Anthony Sarao (LB – 13 games), Lamar Dawson (LB – 1 game), Kevon Seymour (CB – 4 games), Alex Wood (K – 13 games), Kris Albarado (P – 14 games). Other departures include Soma Vainuku (FB), Connor Spears (TE). Offense (17) Defense (13) Special teams (2) Depth chart Official Depth Chart 2016 True Freshman Double Position : * Recruiting class Scholarship distribution chart / / * : Former walk-on – 85 scholarships permitted, 79 currently allotted to players. (Note: Max Browne, E.J. Price, Noah Jefferson, Jabari Ruffin & Scott Felix is still "counts" for 2016, bringing the total to 84). – . – USC can sign 25 players in the class of 2017. 2016 NFL Draft Schedule Game summaries Alabama (Q1, 12:33) USC – #39 Matt Boermeester 47 yard Field Goal – USC 3–0 (Q2, 7:54) ALA – #13 ArDarius Stewart 39 yard pass from #2 Jalen Hurts (#99 Adam Griffith kick) – ALA 7–3 (Q2, 3:30) ALA – #99 Adam Griffith 29 yard Field Goal – ALA 10–3 (Q2, 2:48) ALA – #26 Marlon Humphrey 18 yard interception return (#99 Adam Griffith kick) – ALA 17–3 (Q3, 13:53) ALA – #13 ArDarius Stewart 71 yard pass from #2 Jalen Hurts (#99 Adam Griffith kick) – ALA 24–3 (Q3, 11:01) ALA – #2 Jalen Hurts 7 yard run, (#99 Adam Griffith kick) – ALA 31–3 (Q3, 6:00) ALA – #2 Jalen Hurts 6 yard run, (#99 Adam Griffith kick) – ALA 38–3 (Q3, 2:38) USC – #39 Matt Boermeester 41 yard Field Goal – ALA 38–6 (Q4, 13:16) ALA – #9 Bo Scarbrough 6 yard run, (#99 Adam Griffith kick) – ALA 45–6 (Q4, 9:34) ALA – #14 Gehrig Dieter 45 yard pass from #8 Blake Barnett (#99 Adam Griffith kick) – ALA 52–6 Utah State (Q1, 11:55) USC – #9 JuJu Smith-Schuster 3 yard pass from #4 Max Browne, (#39 Matt Boermeester kick) – USC 7–0 (Q2, 8:08) USC – #80 Deontay Burnett 13 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 14–0 (Q2, 0:24) USC – #7 Steven Mitchell Jr. 
2 yard pass from #4 Max Browne, (#39 Matt Boermeester kick) – USC 21–0 (Q3, 6:43) USC – #39 Matt Boermeester 20 yard Field Goal – USC 24–0 (Q3, 4:06) USC – #2 Adoree' Jackson 77 yard punt return (#39 Matt Boermeester kick) – USC 31–0 (Q3, 0:01) UTS – #83 Wyatt Houston 6 yard pass from #2 Kent Myers, (#63 Brock Warren kick) – USC 31–7 (Q4, 9:50) USC – #9 JuJu Smith-Schuster 15 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 38–7 (Q4, 2:48) USC – #28 Aca'Cedric Ware 2 yard run, (#39 Matt Boermeester kick) – USC 45–7 Stanford (Q1, 4:30) STA – #5 Christian McCaffrey 56 yard pass from #17 Ryan Burns, (#34 Conrad Ukropina kick) – STA 7–0 (Q1, 0:31) USC – #39 Matt Boermeester 47 yard Field Goal – STA 7–3 (Q2, 9:34) STA – #34 Conrad Ukropina 31 yard Field Goal – STA 10–3 (Q2, 2:14) STA – #5 Christian McCaffrey 1 yard run – STA 17–3 (Q3, 11:35) USC – #25 Ronald Jones II 1 yard run, (#39 Matt Boermeester kick) – STA 17–10 (Q3, 5:46) STA – #34 Conrad Ukropina 42 yard Field Goal – STA 20–10 (Q3, 3:37) STA – #3 Michael Rector 56 yard run, (#34 Conrad Ukropina kick) – STA 27–10 Utah (Q1, 7:26) UTA – #3 Troy Williams 10 yard run, (#39 Andy Phillips kick) – UTA 7–0 (Q1, 7:19) USC – #2 Adoree' Jackson 100 yard kick return (#39 Matt Boermeester kick) – USC 7–7 (Q2, 6:48) UTA – #39 Andy Phillips 36 yard Field Goal – UTA 10–7 (Q2, 4:48) USC – #22 Justin Davis 14 yard run, (#39 Matt Boermeester kick) – USC 14–10 (Q2, 0:40) USC – #39 Matt Boermeester 32 yard Field Goal – USC 17–10 (Q3, 9:54) USC – #14 Sam Darnold 8 yard run, (#39 Matt Boermeester kick) – USC 24–10 (Q3, 5:02) UTA – #54 Isaac Asiata recovered fumble, (#39 Andy Phillips kick) – USC 24–17 (Q4, 15:00) USC – #39 Matt Boermeester 43 yard Field Goal – USC 27–17 (Q4, 9:54) UTA – #11 Raelon Singleton 11 yard pass from #3 Troy Williams, (#39 Andy Phillips kick) – USC 27–24 (Q4, 0:16) UTA – #12 Tim Patrick 11 yard pass from #3 Troy Williams, (#39 Andy Phillips kick) – UTA 31–27 Arizona State (Q1, 10:00) ASU – #5 Zane Gonzalez 40 yard Field Goal – ASU 3–0 (Q1, 4:38) USC – #9 JuJu Smith-Schuster 5 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 7–3 (Q1, 0:45) ASU – #5 Zane Gonzalez 34 yard Field Goal – USC 7–6 (Q2, 10:03) USC – #9 JuJu Smith-Schuster 3 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 14–6 (Q2, 4:45) USC – #39 Matt Boermeester 49 yard Field Goal – USC 17–6 (Q2, 2:41) USC – #22 Justin Davis 37 yard run, (#39 Matt Boermeester kick) – USC 24–6 (Q2, 0:25) USC – #39 Matt Boermeester 46 yard Field Goal – USC 27–6 (Q3, 11:27) USC – #9 JuJu Smith-Schuster 67 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 34–6 (Q3, 8:27) USC – #14 Sam Darnold 3 yard run, (#39 Matt Boermeester kick) – USC 41–6 (Q4, 8:45) ASU – #80 Raymond Epps 13 yard pass from #2 Brady White, (#5 Zane Gonzalez kick) – USC 41–13 (Q4, 2:00) ASU – #22 Nick Ralston 10 yard run, (#5 Zane Gonzalez kick) – USC 41–20 Colorado (Q1, 3:38) USC – #88 Daniel Imatorbhebhe 32 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 7–0 (Q2, 3:54) USC – #82 Tyler Petite 11 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 14–0 (Q3, 11:34) COL – #23 Phillip Lindsay 67 yard pass from #4 Bryce Bobo, (#49 Davis Price kick) – USC 14–7 (Q4, 10:26) COL – #4 Bryce Bobo 12 yard pass from #12 Steven Montez, (#49 Davis Price kick) – USC 14–14 (Q4, 8:34) USC – #82 Tyler Petite 8 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 21–14 (Q4, 4:55) COL – #49 Davis Price 42 yard Field Goal – USC 21–17 
Arizona (Q1, 10:34) USC – #25 Ronald Jones II 5 yard run, (#39 Matt Boermeester kick) – USC 7–0 (Q1, 5:44) ARI – #14 Khalil Tate 3 yard run, (#30 Josh Pollack kick) – USC 7–7 (Q1, 0:13) USC – #80 Deontay Burnett 11 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 14–7 (Q2, 13:50) USC – #88 Daniel Imatorbhebhe 32 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 21–7 (Q2, 4:14) USC – #9 JuJu Smith-Schuster 3 yard pass from #14 Sam Darnold – USC 27–7 (Q2, 0:34) USC – #9 JuJu Smith-Schuster 39 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 34–7 (Q3, 10:37) USC – #9 JuJu Smith-Schuster 46 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 41–7 (Q4, 13:16) ARI – #10 Samajie Grant 7 yard pass from #15 Matt Morin, (#30 Josh Pollack kick) – USC 41–14 (Q4, 11:38) USC – #28 Aca'Cedric Ware 21 yard run, (#39 Matt Boermeester kick) – USC 48–7 California (Q1, 10:00) USC – #1 Darreus Rogers 3 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 7–0 (Q1, 4:34) USC – #25 Ronald Jones II 16 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 14–0 (Q2, 12:13) USC – #80 Deontay Burnett 13 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 21–0 (Q2, 08:04) CAL – #5 Tre Watson 22 yard pass from #7 Davis Webb, (#9 Matt Anderson kick) – USC 21–7 (Q2, 1:54) CAL – #9 Matt Anderson 27 yard Field Goal – USC 21–10 (Q2, 0:35) USC – #1 Darreus Rogers 20 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 28–10 (Q3, 9:25) CAL – #7 Davis Webb 1 yard run, (#9 Matt Anderson kick) – USC 28–17 (Q3, 7:10) USC – #25 Ronald Jones II 37 yard run, (#39 Matt Boermeester kick) – USC 35–17 (Q3, 1:18) USC – #88 Daniel Imatorbhebhe 17 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 42–17 (Q4, 13:22) CAL – #1 Melquise Stovall 16 yard pass from #7 Davis Webb, (#9 Matt Anderson kick) – USC 42–24 (Q4, 2:42) USC – #39 Matt Boermeester 32 yard Field Goal – USC 45–24 Oregon (Q1, 12:18) USC – #39 Matt Boermeester 35 yard Field Goal – USC 3–0 (Q1, 8:37) USC – #25 Ronald Jones II 23 yard run, (#39 Matt Boermeester kick) – USC 10–0 (Q1, 5:25) USC – #25 Ronald Jones II 3 yard run, (#39 Matt Boermeester kick) – USC 17–0 (Q1, 0:15) ORE – #6 Charles Nelson 25 yard run – USC 17–6 (Q2, 0:30) USC – #80 Deontay Burnett 3 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 24–6 (Q3, 13:51) USC – #25 Ronald Jones II 66 yard run, (#39 Matt Boermeester kick) – USC 31–6 (Q3, 9:57) ORE – #85 Pharaoh Brown 5 yard pass from #10 Justin Herbert, (#41 Aidan Schneider kick) – USC 31–13 (Q3, 2:47) USC – #48 Taylor McNamara 8 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 38–13 (Q4, 11:42) USC – #25 Ronald Jones II 1 yard run, (#39 Matt Boermeester kick) – USC 45–13 (Q4, 5:41) ORE – #9 Dakota Prukop 15 yard run, (#41 Aidan Schneider kick) – USC 45–20 Washington (Q1, 4:03) WASH – #48 Cameron Van Winkle 43 yard Field Goal – WASH 3–0 (Q1, 0:46) USC – #39 Matt Boermeester 38 yard Field Goal – USC 3–3 (Q2, 9:00) USC – #1 Darreus Rogers 13 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 10–3 (Q2, 2:39) WASH – #48 Cameron Van Winkle 39 yard Field Goal – USC 10–6 (Q2, 0:34) USC – #25 Ronald Jones II 4 yard run, (#39 Matt Boermeester kick) – USC 17–6 (Q3, 8:33) USC – #1 John Ross 70 yard pass from #3 Jake Browning, (#48 Cameron Van Winkle kick) – USC 17–13 (Q4, 13:39) USC – #88 Daniel Imatorbhebhe 8 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 24–13 (Q4, 0:58) USC – 
Team Safety – USC 26–13 UCLA (Q1, 13:41) UCLA – #2 Jordan Lasley 56 yard pass from #12 Mike Fafaul, (#17 JJ Molson kick) – UCLA 7–0 (Q1, 9:54) USC – #25 Ronald Jones II 1 yard run, (#39 Matt Boermeester kick) – USC 7–7 (Q2, 13:34) UCLA – #2 Jordan Lasley 7 yard pass from #12 Mike Fafaul, (#17 JJ Molson kick) – UCLA 14–7 (Q2, 11:19) USC – #25 Ronald Jones II 60 yard run, (#39 Matt Boermeester kick) – USC 14–14 (Q2, 8:18) USC – #13 De'Quan Hampton 31 yard pass from #14 Sam Darnold – USC 20–14 (Q2, 0:12) USC – #39 Matt Boermeester 30 yard Field Goal – USC 23–14 (Q3, 10:01) USC – #13 De'Quan Hampton 6 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 30–14 (Q4, 13:38) USC – #39 Matt Boermeester 32 yard Field Goal – USC 33–14 (Q4, 7:01) USC – #39 Matt Boermeester 25 yard Field Goal – USC 36–14 Notre Dame (Q1, 10:22) USC – #39 Matt Boermeester 37 yard Field Goal USC 3–0 (Q1, 9:51) ND – #14 DeShone Kizer 1 Yd Run (Justin Yoon Kick) ND 7–3 (Q1, 7:08) USC – #25 Ronald Jones II 51 yard run, (#39 Matt Boermeester kick) USC 10–7 (Q2, 1:24) USC – #2 Adoree' Jackson 55 Yd Punt Return (Matt Boermeester Kick) USC 17–7 (Q2, 1:07) USC – #27 Ajene Harris 33 Yd Interception Return (Matt Boermeester Kick) USC 24–7 (Q3, 10:08) ND – #10 Chris Finke 14 Yd pass from DeShone Kizer (Justin Yoon Kick) USC 24–14 (Q3, 7:41) USC – #2 Adoree' Jackson 52 Yd pass from Sam Darnold (Matt Boermeester Kick) USC 31–14 (Q3, 1:41) ND – #29 Kevin Stepherson 29 Yd pass from DeShone Kizer (Justin Yoon Kick) USC 31–21 (Q3, 0:47) USC – #2 Adoree' Jackson 97 Yd Kickoff Return (Matt Boermeester Kick) USC 38–21 (Q4, 9:40) USC – #9 Juju Smith-Schuster 2 Yd pass from Sam Darnold (Matt Boermeester Kick) USC 45–21 (Q4, 1:02) ND – #6 Equanimeous St. Brown 15 Yd pass from Malik Zaire (Two-Point Run Conversion Failed) USC 45–27 Penn State (Rose Bowl) (Q1, 10:39) USC – #80 Deontay Burnett 26 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 7–0 (Q1, 4:46) USC – #39 Matt Boermeester 22 yard Field Goal – USC 10–0 (Q1, 9:26) USC – #39 Matt Boermeester 44 yard Field Goal – USC 13–0 (Q2, 11:50) PSU – #26 Saquon Barkley 24 yard run, (#95 Tyler Davis kick) – USC 13–7 (Q2, 10:15) USC – #80 Deontay Burnett 3 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 20–7 (Q2, 8:49) PSU – #12 Chris Godwin 30 yard pass from #9 Trace McSorley, (#95 Tyler Davis kick) – USC 20–14 (Q2, 6:20) USC – #1 Darreus Rogers 3 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 27–14 (Q2, 1:00) PSU – #88 Mike Gesicki 11 yard pass from #9 Trace McSorley, (#95 Tyler Davis kick) – USC 27–21 (Q3, 13:22) PSU – #26 Saquon Barkley 79 yard run, (#95 Tyler Davis kick) – PSU 28–27 (Q3, 11:38) PSU – #12 Chris Godwin 72 yard pass from #9 Trace McSorley, (#95 Tyler Davis kick) – PSU 35–27 (Q3, 10:31) PSU – #9 Trace McSorley 3 yard run, (#95 Tyler Davis kick) – PSU 42–27 (Q3, 6:54) USC – #9 JuJu Smith-Schuster 13 yard pass from #14 Sam Darnold, (2pts Good) – PSU 42–35 (Q3, 2:00) PSU – #26 Saquon Barkley 7 yard pass from #9 Trace McSorley, (#95 Tyler Davis kick) – PSU 49–35 (Q4, 8:19) USC – #4 Ronald Jones II 3 yard run, (#39 Matt Boermeester kick) – PSU 49–42 (Q4, 1:27) USC – #80 Deontay Burnett 27 yard pass from #14 Sam Darnold, (#39 Matt Boermeester kick) – USC 49–49 (Q4, 0:00) USC – #39 Matt Boermeester 46 yard Field Goal – USC 52–49 Rankings Statistics Team As of November 20, 2016. 
Non-conference opponents Pac-12 opponents Offense Defense Key: POS: Position, SOLO: Solo Tackles, AST: Assisted Tackles, TOT: Total Tackles, TFL: Tackles-for-loss, SACK: Quarterback Sacks, INT: Interceptions, BU: Passes Broken Up, PD: Passes Defended, QBH: Quarterback Hits, FR: Fumbles Recovered, FF: Forced Fumbles, BLK: Kicks or Punts Blocked, SAF: Safeties, TD : Touchdown Special teams Awards and honors Coaches Clay Helton – Head Coach Paul "Bear" Bryant Award (Coach of the Year) : Finalist FWAA First Year Coach of Year Offense Sam Darnold – QB – Freshman Archie Griffin Award (most valuable player) : Winner Pac-12 Offensive Freshman of the Year Manning Award (quarterback) : Finalist Davey O'Brien Award (quarterback) : Semi-Finalist JuJu Smith-Schuster – WR – Junior POLY POY (Polynesian College Football Player of the Year) : Finalist Defense Adoree' Jackson – CB – Junior Jim Thorpe Award (defensive back) : Winner Pac-12 Defensive Player of The Year Paul Hornung Award (most versatile player) : Finalist College Sports Madness All-American Team First Team Zach Banner – OT – Senior (164th) Adoree' Jackson – CB – Junior (165th) Third Team Adoree' Jackson – KR & PR – Junior All-Pac-12 Individual Awards Offensive Freshman of the Year Sam Darnold – QB – Freshman Defensive Player of the Year Adoree' Jackson – CB – Junior PAC-12 All-Conference Team First Team Zach Banner – OT – Senior Chad Wheeler – OT – Senior Adoree' Jackson – CB & PR – Junior Second Team Ronald Jones II – TB – Sophomore JuJu Smith-Schuster – WR – Junior Damien Mama – OG – Junior Stevie Tu'ikolovatu – DT – Senior Cameron Smith – ILB – Sophomore Honorable Mention Sam Darnold – QB – Freshman Darreus Rogers – WR – Senior Daniel Imatorbhebhe – TE – Freshman Rasheem Green – DT – Sophomore Porter Gustin – DE – Sophomore Michael Hutchings – ILB – Senior Iman Marshall – CB – Sophomore Chris Hawkins – S – Junior Leon McQuay III – S – Senior PAC-12 All-Freshman Team First Team Sam Darnold – QB – Freshman Daniel Imatorbhebhe – TE – Freshman PAC-12 All-Academic Team Honorable Mention Sam Darnold – QB – Freshman Max Browne – QB – Junior Matt Boermeester – K – Junior Chris Tilbey – P – Sophomore USA Today Sports Freshman All-America Team Second Team Adoree' Jackson – RET – Junior FWAA Freshman All-America Team Sam Darnold – QB – Freshman Notes January 4, 2016 – Tyson Helton, Neil Callaway Officially Join USC Football Staff. January 4, 2016 – Keanu Saleapaga De-Commits from USC Football. January 8, 2016 – John Baxter Returns to Coach USC Special Teams. January 10, 2016 – Marques Tuiasosopo Leaves USC to be UCLA QB Coach. January 11, 2016 – USC Officially Hires Defensive Coordinator Clancy Pendergast. January 12, 2016 – Peter Sirmon Leaving USC to be Mississippi State DC. January 14, 2016 – USC hires Ronnie Bradford to Coach Defensive Backs. January 18, 2016 – Tommie Robinson returning to USC to Coach Tailbacks and Run Game Coordinator. January 22, 2016 – Connor Spears Steps Away From USC Football Team. January 22, 2016 – Velus Jones flips from USC to Oklahoma. January 23, 2016 – Velus Jones Announces Re-Commitment to USC from Oklahoma. January 23, 2016 – Keyshon Camp De-Commits From USC. January 26, 2016 – Kenechi Udeze Hired as USC Defensive Line Coach. February 3, 2016 – Jack Jones Commits to USC Football on Signing Day. February 3, 2016 – USC Finishes With Top 10 Signing Day 2016 Recruiting Class. February 4, 2016 – Tee Martin Named 247Sports 2016 Recruiter of the Year. February 5, 2016 – QB Matt Corral Commits for 2018. 
February 12, 2016 – USC to NFL: Seven Trojans Invited to 2016 NFL Combine. March 2, 2016 – Caleb Wilson to Transfer From USC to UCLA. March 26, 2016 – USC Football: Kenny Bigelow Has Torn ACL, Will Miss 2016 Season. April 1, 2016 – Scott Felix Loses NCAA Appeal, Ending USC Football Career. April 5, 2016 – Thomas Graham De-Commits From USC. April 6, 2016 – Bo Calvert Commits to the Trojans. April 8, 2016 – LB Raymond Scott Commits to Trojans for 2018. April 13, 2016 – USC Hires Lynn Swann as New Athletic Director. April 22, 2016 – USC Football Players Rap While Stuck in an Elevator (Video). April 23, 2016 – Juliano Falaniko Commits to Trojans. May 19, 2016 – 3-Star LB Daniel Green Commits. May 19, 2016 – USC Football Boasts Most Wins Over Ranked Opponents In CFB. June 8, 2016 – Nation's No. 2 Center Commits to USC over Michigan. June 10, 2016 – Andrew Vorhees Commits to 2017 USC Recruiting Class. June 14, 2016 – USC Offensive Line is Nation's 2nd Most Experienced. June 16, 2016 – Utah Defensive Tackle Steve Tu’ikolovatu Transfers To USC Football. June 17, 2016 – Three Future Contests With Fresno State Announced. June 20, 2016 – C.J. Miller, 3-Star Safety, Commits to USC Football. June 27, 2016 – Erik Krommenhoek, 3-Star TE, Commits to USC Football. July 3, 2016 – Adoree’ Jackson Doesn't Qualify for 2016 Olympic Games in Rio. July 11, 2016 – 5 Trojans Make SI's Top 100 College Football Players of 2016. July 11, 2016 – New USC Football Jerseys Unveiled for 2016 Season. July 12, 2016 – USC vs. Alabama: Trojans to Wear White as Designated Road Team. July 13, 2016 – Unwrapping the New 2016 USC Football Media Guide. July 15, 2016 – Bubba Bolden Decommitted From USC Recruiting Class. July 19, 2016 – Isaiah Langley Suspended for USC vs. Alabama. July 21, 2016 – USC Football Picture Day 2016: Photos, Videos and New Jerseys. July 30, 2016 – Wylan Free Commits to 2017 USC Recruiting Class. August 2, 2016 – USC Football Ranks No. 5 In All-Time AP Poll. August 3, 2016 – Juwan Burgess, 3-Star ATH, Commits to USC Football. August 4, 2016 – USC Football Announces Stevie Tu’ikolovatu, 5 New Players for 2016. August 5, 2016 – Alijah Vera-Tucker Commits to 2017 USC Recruiting Class. August 5, 2016 – Hunter Echols Commits to 2017 USC Recruiting Class. August 10, 2016 – Freshman Vavae Malepeai Out 6–8 Weeks. August 16, 2016 – Projecting 2016 Starting Lineup After Fall Camp. August 16, 2016 – Jacob Lichtenstein, 3-Star DE, Commits to USC Football. August 17, 2016 – Tyson Helton and a New Look for USC Offense in 2016. August 18, 2016 – Matt Lopes, Reuben Peters, James Toland Awarded Scholarships. August 20, 2016 – Max Browne Named USC Starting Quarterback. August 20, 2016 – James Lynch, 3-Star DT, Commits To USC Football. August 21, 2016 – USC Football Ranked No. 20 In Preseason AP Poll. August 21, 2016 – Clay Helton Releases First USC Depth Chart of 2016. August 29, 2016 – LB Osa Masina Suspended For Season Opener. August 31, 2016 – 2016 USC Football Team Captains Announced. September 2, 2016 – Don Hill Sent Home Before USC vs Alabama Game. September 3, 2016 – USC vs Alabama Score and Recap: Tide Roll Big Over Trojans. September 4, 2016 – Jabari Ruffin Suspended For 1st Half vs Utah State. September 6, 2016 – USC Football Drops Out of AP Poll After Alabama Loss. September 8, 2016 – Toa Lobendahn Tears ACL Again, Out For Season. September 10, 2016 – USC vs Utah Score: Trojans Win Big in Home Opener. September 11, 2016 – Chuma Edoga Avoids Suspension For USC vs Stanford. 
September 12, 2016 – Adoree’ Jackson Named Pac-12 Special Teams Player of the Week. September 13, 2016 – Osa Masina, Don Hill Suspended From All USC Football Team Activities. September 15, 2016 – USC Football Should Use Adoree’ Jackson To Stop Christian McCaffrey. September 17, 2016 – USC vs Stanford Final Score: Cardinal Cruise to Victory. September 19, 2016 – Sam Darnold Named USC Starting Quarterback vs. Utah. September 19, 2016 – Clay Helton's USC Quarterback Change Put Max Browne Out to Dry. September 19, 2016 – Clay Helton Picked Sam Darnold as USC Quarterback Too Late. September 20, 2016 – Osa Masina, Don Hill Removed From USC Football Roster. September 20, 2016 – Freshman EJ Price to Transfer From USC Football Team. September 23, 2016 – USC vs Utah Final Score and Recap: Trojans Collapse in Salt Lake. September 24, 2016 – USC Football Mistakes From Top To Bottom Exposed vs Utah. September 24, 2016 – Justin Davis Joins USC's Illustrious 2,000-Yard Club. September 24, 2016 – Clay Helton, USC Need an Unthinkable Turnaround After 1–3 Start. September 27, 2016 – Noah Jefferson To Be Held Out Until November. October 1, 2016 – USC vs ASU Final Score: Trojans Win a Blowout. October 8, 2016 – USC vs Colorado Final Score: Trojans Survive To Down Buffaloes. October 14, 2016 – USC Football Has Reason To Look Forward After Back-To-Back Wins. October 15, 2016 – USC vs Arizona Score: Trojans Roll Wildcats in a Blowout. October 16, 2016 – Steven Mitchell Jr. Out For Season With ACL Tear. October 24, 2016 – Adoree’ Jackson Named 2016 Thorpe Award Semifinalist. October 27, 2016 – USC vs Cal Score: Trojans Roll Over Bears At Coliseum. November 1, 2016 – Max Browne Granted Permission To Seek Transfer From USC Football. November 5, 2016 – USC vs Oregon Score: Trojans Blow Out the Ducks. November 6, 2016 – Defense Comes Full Circle in Shutdown of Oregon Ducks. November 8, 2016 – USC Football Ranked No. 20 In College Football Playoff Rankings. November 10, 2016 – Sam Darnold Named Semifinalist for Davey O’Brien Award. November 12, 2016 – Redemption on the Line for Clay Helton, USC Football vs. Washington. November 12, 2016 – USC vs Washington: Lee Corso Picks Huskies On College Gameday. November 12, 2016 – USC vs Washington Score: Trojans Topple the No. 4 Huskies. November 13, 2016 – QB Jack Sears Commits To 2017 USC Recruiting Class, Flips From Duke. November 13, 2016 – It's Time To Give Clay Helton, USC Football Their Due Praise. November 14, 2016 – Sam Darnold Named Pac-12 Player of the Week After USC Win Over UW. November 15, 2016 – Clay Helton's USC Football Pulls Massive Upset In Seattle. November 15, 2016 – USC Football Jumps To No. 13 In College Football Playoff Rankings. November 17, 2016 – Adoree’ Jackson Named Finalist For 2016 Paul Hornung Award. November 19, 2016 – USC vs UCLA Score: Trojans Rout Bruins For 7th Straight Win. November 20, 2016 – USC Blowout of UCLA Fueled By Pete Carroll Style Run. November 21, 2016 – Adoree’ Jackson Named Finalist For Thorpe Award. November 22, 2016 – Juju Smith-Schuster Named Finalist For Polynesian Player Of The Year Award. November 22, 2016 – USC Football Moves Up To No. 12 In College Football Playoff Ranking. November 26, 2016 – Colorado Beats Utah, USC Football Falls Short Of Pac-12 South Title. November 26, 2016 – USC vs Notre Dame Score: Trojans Beat Irish, Win 8th-Straight Game. November 28, 2016 – Two Return TDs Earn Adoree’ Jackson Pac-12 Player Of The Week Honors. November 29, 2016 – Adoree’ Jackson, Sam Darnold Lead USC's 2016 All-Pac-12 Honors. 
November 29, 2016 – Noah Jefferson Reportedly Transferring From USC Football. November 29, 2016 – USC Football Up To No. 11 In College Football Playoff Rankings. November 30, 2016 – USC's Sam Darnold Named Finalist For 2016 Manning Award. November 30, 2016 – Adoree’ Jackson Named Finalist For Lott IMPACT Trophy. December 1, 2016 – Former USC RB Joe McKnight Killed in New Orleans at 28. December 4, 2016 – USC Football Ranked No. 9 In Week 15 AP Poll. December 4, 2016 – 2017 Rose Bowl Odds: USC Favored Over Penn State. December 5, 2016 – Juju Smith-Schuster Teases Return For Senior Season At USC. December 5, 2016 – Adoree’ Jackson Snubbed From Heisman Trophy Finalists. December 5, 2016 – 3-Star LB Daniel Green De-Commits. December 7, 2016 – Adoree’ Jackson, Zach Banner Named First Team All-Americans. December 8, 2016 – Adoree’ Jackson Says USC Comeback “Really On The Table”. December 8, 2016 – Adoree’ Jackson Wins 2016 Jim Thorpe Award. December 8, 2016 – 3-Star CB Wylan Free De-Commits. December 10, 2016 – James Lynch De-Commits from 2017 USC Recruiting Class. December 10, 2016 – Marcus Johnson Commits to 2018 USC Recruiting Class. December 11, 2016 – Juju Smith-Schuster To Announce NFL Decision After Rose Bowl. December 11, 2016 – What if Adoree’ Jackson and Juju Smith-Schuster Returned to USC in 2017?. December 12, 2016 – Top 10 USC Football Players of 2016. December 12, 2016 – Adoree’ Jackson Earns Consensus All-American Honors. December 13, 2016 – Jalen Greene To Serve As Back Up QB For Rose Bowl. December 13, 2016 – USC Injury Report: Jonathan Lockett To Undergo Season-Ending Surgery. December 14, 2016 – USC's Adoree’ Jackson Earns Unanimous All-American Honors. December 14, 2016 – Clay Helton Named Finalist For Bear Bryant Coach Of The Year Award. December 14, 2016 – USC Football Awards 2016: Adoree’ Jackson Repeats As Team MVP. December 14, 2016 – USC Injury Report: Nathan Smith Tears ACL. December 15, 2016 – USC Football Is Back In The Rose Bowl and Back On Track. December 15, 2016 – Former USC QB Max Browne to Transfer to Pitt. December 15, 2016 – Tayler Katoa Commits to 2017 USC Recruiting Class. December 16, 2016 – USC Left Tackle Chad Wheeler Named First Team All-American. December 17, 2016 – Zach Banner's All-American Nod Pays Off USC Career In Tumultuous Era. December 19, 2016 – USC OT Zach Banner Accepts Invitation To 2017 Reese's Senior Bowl. December 20, 2016 – USC NFL: Three Trojans Make 2017 Pro Bowl. December 22, 2016 – Terrance Lang Commits to 2017 USC Recruiting Class. December 22, 2016 – Sam Darnold, Daniel Imatorbhebhe Named to Athlon All-Freshman Team. December 27, 2016 – Olajuwon Tucker, Kevin Scott Academically Ineligible for 2017 Rose Bowl. December 28, 2016 – Penn State Suspends Two Players for 2017 Rose Bowl vs. USC. December 28, 2016 – Long snapper Johnson becomes part of 2017 class. December 29, 2016 – USC Football Sending Four Players to East-West Shrine Game. December 30, 2016 – Khaliel Rodgers To Transfer Away From USC Football. References USC USC Trojans football seasons Rose Bowl champion seasons USC Trojans football USC Trojans football
2798525
https://en.wikipedia.org/wiki/EmperorLinux
EmperorLinux
EmperorLinux, Inc. is a reseller who, according to PC Magazine, "specialize in the sales of pre-configured Linux laptops for companies and individuals that want stable, easy-to use laptops". EmperorLinux was founded in 1999 by Lincoln Durey, an EE Ph.D. from Tulane University. The company's first product was the BlackPerl Linux laptop, based on a Sony VAIO 505TR with a highly modified Linux kernel. Since 1999, the company has added a range of IBM ThinkPads, Dell Latitudes, and Sharp laptops to its lineup. These laptops are available with most major Linux distributions, including Fedora, RHEL, Debian, Ubuntu, and SuSE. Significant improvements to stock Linux distributions come from the empkernel and a carefully configured /etc directory. Supported features include APM and ACPI suspend and hibernate support, CPU throttling, LCD backlight brightness control, wireless, and generally full support of the hardware under Linux. The company is privately held and based in Atlanta, Georgia, US. See also Free software References External links Linux companies Privately held companies based in Georgia (U.S. state) Companies based in Atlanta Computer companies established in 1999 1999 establishments in Georgia (U.S. state)
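As a generic illustration of one of the features listed above (LCD backlight brightness control under Linux), the sketch below reads and writes the kernel's sysfs backlight interface. It is not EmperorLinux's tooling; the device name "intel_backlight" is an assumption and differs between laptops.

```python
# Query and set LCD backlight brightness through sysfs (device name is an example).
from pathlib import Path

device = Path("/sys/class/backlight/intel_backlight")
maximum = int((device / "max_brightness").read_text())
current = int((device / "brightness").read_text())
print(f"backlight at {100 * current // maximum}%")

# Writing usually requires root privileges; here we set it to half of maximum.
(device / "brightness").write_text(str(maximum // 2))
```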
355054
https://en.wikipedia.org/wiki/CAcert.org
CAcert.org
CAcert.org is a community-driven certificate authority that issues free X.509 public key certificates. CAcert.org relies heavily on automation and therefore issues only domain-validated certificates (and not Extended Validation or Organization Validation certificates). These certificates can be used to digitally sign and encrypt email, code, and documents, and to authenticate and authorize user connections to websites via TLS/SSL. CAcert Inc. Association On 24 July 2003, Duane Groth incorporated CAcert Inc. as a non-profit association registered in New South Wales (Australia). CAcert Inc. runs CAcert.org. In 2004, the Dutch Internet pioneer Teus Hagen became involved. He served as a board member and, in 2008, as president. Certificate Trust status A disadvantage of CAcert.org is that its root certificates are not included in the most widely deployed certificate stores, so its users have to add them themselves. As of 2021, most browsers, email clients, and operating systems do not automatically trust certificates issued by CAcert. Thus, users receive an "untrusted certificate" warning when visiting a website that presents an X.509 certificate issued by CAcert, or when viewing emails authenticated with CAcert certificates in Microsoft Outlook or Mozilla Thunderbird. CAcert uses its own certificate on its website. Web Browsers Discussion of including the CAcert root certificate in the Mozilla Application Suite and Mozilla Firefox started in 2004. Mozilla had no CA certificate policy at the time. Eventually, Mozilla developed a policy that required CAcert to improve its management system and conduct audits. In April 2007, CAcert formally withdrew its application for inclusion in the Mozilla root program. At the same time, the CA/Browser Forum was established to facilitate communication among browser vendors and certificate authorities. Mozilla's advice was incorporated into the "Baseline Requirements" used by most major browser vendors. Progress toward meeting Mozilla's policy and the Baseline Requirements, and hence a new request for inclusion, cannot be expected in the near future. Operating systems FreeBSD included CAcert's root certificate but removed it in 2008, following Mozilla's policy. In 2014, CAcert was removed from the Ubuntu, Debian, and OpenBSD root stores. In 2018, CAcert was removed from Arch Linux. As of February 2022, the following operating systems or distributions include the CAcert root certificate by default: Arch Linux Ark Linux FreeWRT Gentoo (app-misc/ca-certificates only when the USE flag cacert is set; defaults to off from version 20161102.3.27.2-r2) GRML Knoppix Mandriva Linux MirOS BSD Openfire Privatix Replicant (Android) As of 2021, the following operating systems or distributions have an optional package with the CAcert root certificate: Debian openSUSE Web of trust To create higher-trust certificates, users can participate in a web of trust system whereby users physically meet and verify each other's identities. CAcert maintains the number of assurance points for each account. Assurance points can be gained through various means, primarily by having one's identity physically verified by users classified as "Assurers". Having more assurance points allows users more privileges, such as writing a name in the certificate and longer expiration times on certificates. 
A user with at least 100 assurance points is a Prospective Assurer and may, after passing an Assurer Challenge, verify other users; more assurance points allow the Assurer to assign more assurance points to others. CAcert sponsors key signing parties, especially at big events such as CeBIT and FOSDEM. As of 2021, CAcert's web of trust has over 380,000 verified users. Root certificate descriptions Since October 2005, CAcert has offered Class 1 and Class 3 root certificates. Class 3 is a high-security subset of Class 1. See also Let's Encrypt CAcert wiki Further reading References Cryptography organizations Certificate authorities Transport Layer Security Information privacy Safety engineering
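Because, as noted above, the CAcert root is not shipped in default trust stores, a client that wants to validate a CAcert-issued certificate has to supply the root explicitly. The sketch below shows one way to do this with Python's standard library; the local filename of the downloaded root certificate is an assumption, not part of any official CAcert tooling.

```python
# Minimal sketch: trusting the CAcert class 1 root for a single TLS connection.
# Assumes the root certificate has already been downloaded to "cacert_root.crt"
# (an illustrative filename); without it, verification fails as "untrusted".
import ssl
import urllib.request

context = ssl.create_default_context(cafile="cacert_root.crt")  # trust only this root
with urllib.request.urlopen("https://www.cacert.org/", context=context) as response:
    print(response.status)  # 200 if the server's chain verifies against the CAcert root
```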
42100171
https://en.wikipedia.org/wiki/Agent.BTZ
Agent.BTZ
Agent.BTZ, also named Autorun, is a computer worm that infects USB flash drives with spyware. A variant of the SillyFDC worm, it was used in a massive 2008 cyberattack on the US military. Technical description The Agent.BTZ worm is a DLL file written in assembly language (32-bit x86). It spreads by writing an AUTORUN.INF file, together with the DLL file, to the root of each drive. It has the ability "to scan computers for data, open backdoors, and send through those backdoors to a remote command and control server." A malware sample is available on GitHub. History In 2008, at a US military base in the Middle East, a USB flash drive infected with Agent.BTZ was inserted into a laptop attached to United States Central Command. From there it spread undetected to other systems, both classified and unclassified. To stop the spread of the worm, the Pentagon banned USB drives and removable media devices. It also disabled the Windows autorun feature on its computers. The Pentagon spent nearly 14 months cleaning the worm from military networks. Attribution Russian hackers were thought to be behind the attack because they had used the same code that made up Agent.BTZ in previous attacks. According to an article in The Economist, "it is not clear that agent.btz was designed specifically to target military networks, or indeed that it comes from either Russia or China." An article in the Los Angeles Times reported that US defense officials described the malicious software as "apparently designed specifically to target military networks." The malware was "thought to be from inside Russia", although it was not clear "whether the destructive program was created by an individual hacker or whether the Russian government may have had some involvement." In 2010, American journalist Noah Shachtman wrote an article investigating the theory that the worm was written by a single hacker. Later analyses by Kaspersky Lab found links to other spyware, including Red October, Turla, and Flame. In December 2016, the United States FBI and DHS issued a Joint Analysis Report which attributed Agent.BTZ to one or more "Russian civilian and military intelligence Services (RIS)." References Spyware Computer worms
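The propagation pattern described above (an autorun.inf dropped in a drive's root that launches a DLL) is straightforward to check for. The following is a minimal defensive sketch, not Agent.BTZ itself and not any vendor's tool; the drive letters and the keyword heuristic are illustrative assumptions.

```python
# Look for an autorun.inf in a drive's root whose commands reference a DLL,
# e.g. via rundll32, the kind of entry associated with Agent.BTZ-style worms.
import re
from pathlib import Path

def suspicious_autorun(drive_root: str) -> bool:
    inf = Path(drive_root) / "autorun.inf"
    if not inf.is_file():
        return False
    text = inf.read_text(errors="ignore").lower()
    return bool(re.search(r"rundll32|\.dll", text))  # crude heuristic only

for root in ("E:\\", "F:\\"):  # example removable-drive letters
    try:
        if suspicious_autorun(root):
            print(f"{root} autorun.inf references a DLL - inspect this drive")
    except OSError:
        pass  # drive not present or unreadable
```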
43749863
https://en.wikipedia.org/wiki/The%20Cervantes%20Group
The Cervantes Group
The Cervantes Group is an international information technology consulting and talent acquisition firm with offices in San Juan (Puerto Rico), Querétaro (Mexico), Madrid (Spain), Boston, and Chicago. Specializations include software development, project management, business analysis, mobile application development and program management. History The Cervantes Group does business across the U.S., Puerto Rico, and Europe. It was founded in 2004 by Joanna Bauzá González, a professional tennis player and entrepreneur from Guaynabo, Puerto Rico, and Timothy B. Mullen of Evanston, Illinois, and originally operated from their house in San Juan, Puerto Rico. In 2011, Joanna Bauzá and Timothy B. Mullen received the "Zenit Teodoro Moscoso Award", the young entrepreneur of the year award, from the governor of Puerto Rico, Luis Guillermo Fortuño Burset. That same year, The Cervantes Group was placed at number 1,465 on Inc. magazine's 5000 list of the fastest-growing private companies in America, based on annual growth of over 191%. The Cervantes Group later made Inc. magazine's list of the 5000 fastest-growing private companies in the U.S. again, at number 3,945. On April 24, 2015, the President of Marquette University, Dr. Michael Lovell, conferred the Spirit of Marquette Award on the founders of The Cervantes Group, Mullen and Bauzá. An "All-University" award given by Marquette University, it was one of only six conferred in the entire 2015 calendar year. On April 30, 2015, The Cervantes Group received the award for "Excellence in Quality of Service in 2015" from the Puerto Rico Product Association during its annual award ceremony, "Made in Puerto Rico Day". In September 2016, Joanna Bauzá was named to the Board of Trustees of Marquette University. Affiliates The Cervantes Group LLC – U.S. The Cervantes Group Inc. – Puerto Rico The Cervantes Group S.L. – Europe References External links Article: Keeping Bar High Companies based in San Juan, Puerto Rico Information technology consulting firms of the United States
30873746
https://en.wikipedia.org/wiki/Interior%20gateway%20protocol
Interior gateway protocol
An interior gateway protocol (IGP) is a type of routing protocol used for exchanging routing table information between gateways (commonly routers) within an autonomous system (for example, a system of corporate local area networks). This routing information can then be used to route network-layer protocols like IP. Interior gateway protocols can be divided into two categories: distance-vector routing protocols and link-state routing protocols. Specific examples of IGPs include Open Shortest Path First (OSPF), Routing Information Protocol (RIP), Intermediate System to Intermediate System (IS-IS) and Enhanced Interior Gateway Routing Protocol (EIGRP). By contrast, exterior gateway protocols are used to exchange routing information between autonomous systems and rely on IGPs to resolve routes within an autonomous system. Types Distance-vector routing protocol Distance-vector routing protocols use the Bellman–Ford algorithm. In these protocols, no router possesses information about the full network topology. Each router advertises its calculated distance values (DV) to its neighbouring routers and receives similar advertisements from them, either periodically or when changes occur in the local network or at a neighbour. Using these routing advertisements, each router populates its routing table, and in the next advertisement cycle it advertises the updated information from that table. This process continues until the routing tables of all routers converge to stable values. Some of these protocols have the disadvantage of slow convergence. Examples of distance-vector routing protocols: Routing Information Protocol (RIP) Routing Information Protocol Version 2 (RIPv2) Routing Information Protocol Next Generation (RIPng), an extension of RIP version 2 with support for IPv6 Interior Gateway Routing Protocol (IGRP) Link-state routing protocol In link-state routing protocols, each router possesses information about the complete network topology. Each router then independently calculates the best next hop from it for every possible destination in the network using local information of the topology. The collection of best next hops forms the routing table. This contrasts with distance-vector routing protocols, which work by having each node share its routing table with its neighbours. In a link-state protocol, the only information passed between the nodes is the information used to construct the connectivity maps. Examples of link-state routing protocols: Open Shortest Path First (OSPF) Intermediate System to Intermediate System (IS-IS) Advanced distance vector routing protocol Advanced distance-vector routing protocols combine features of distance-vector routing protocols and link-state routing protocols. One example is the Enhanced Interior Gateway Routing Protocol (EIGRP). See also Route analytics Routing protocols
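To make the distance-vector update described above concrete, here is a toy, single-step sketch of the Bellman–Ford relaxation. The topology, link costs, and router names are invented for illustration; real protocols such as RIP layer split horizon, hold-down timers, and a hop-count limit on top of this basic step.

```python
# One round of a distance-vector update for a single router, given the
# advertised tables of its direct neighbours (all values are illustrative).
INF = float("inf")

def dv_update(my_table, neighbour_costs, neighbour_tables):
    """Return this router's table after relaxing against neighbour advertisements."""
    updated = dict(my_table)
    for n, link_cost in neighbour_costs.items():
        for dest, dist in neighbour_tables[n].items():
            if link_cost + dist < updated.get(dest, INF):
                updated[dest] = link_cost + dist  # shorter path found via neighbour n
    return updated

# Router A has neighbours B (cost 1) and C (cost 4).
table_a = {"A": 0, "B": 1, "C": 4}
advertised = {
    "B": {"A": 1, "B": 0, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "C": 0, "D": 1},
}
print(dv_update(table_a, {"B": 1, "C": 4}, advertised))
# {'A': 0, 'B': 1, 'C': 3, 'D': 5} -> C is now reached via B, and D via C
```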
46203249
https://en.wikipedia.org/wiki/Jenny%20LeClue
Jenny LeClue
Jenny LeClue: Detectivú is an adventure game developed and published by Mografi. It was released for iOS (as a launch title for Apple Arcade), Microsoft Windows, macOS, and Linux in September 2019, and will be released for PlayStation 4 at a later date. The Spoken Secrets Edition added full voice acting as a free update on July 24, 2020. The game was released for Nintendo Switch on August 25, 2020. Gameplay The choices that players make throughout the game affect how the story unfolds. The game contains point-and-click elements and features a hand-drawn art style. Players are able to interact with most items in the game. During dialogue sections, players can observe the person talking to spot clues as to whether they are lying. Locations that players explore include mines, graveyards, mountains, police stations, and libraries. Plot Author Arthur K. Finklestein is the creator of the Jenny LeClue series of books, a classic children's sleuth series whose sales are dwindling as readers begin to find the stories trite and boring. His publisher demands a darker, grittier Jenny LeClue book or else the series will be canceled, so Finklestein reluctantly responds by giving Jenny her biggest case yet: the murder of the beloved dean of local Gumboldt College, with her forensic science professor mother as the prime suspect. Within the narrative, Jenny must clear her mom's name and repair her fractured relationship with the dean's son, while Finklestein struggles to commit to putting his beloved Jenny and her town of Arthurton through the wringer in order to keep the series alive. Development The game had a successful crowdfunding campaign: with a goal of $65,000, it raised $105,797 by August 2014. It was released for iOS, Microsoft Windows, macOS, and Linux on September 19, 2019, and for Nintendo Switch on August 25, 2020; a PlayStation 4 release is planned. The iOS version was released through the Apple Arcade service. References External links 2019 video games Apple Arcade games Crowdfunded video games Detective video games Indie video games IOS games Kickstarter-funded video games Linux games MacOS games Nintendo Switch games PlayStation 4 games Point-and-click adventure games Video games featuring female protagonists Windows games
36292664
https://en.wikipedia.org/wiki/Webcam%20Social%20Shopper
Webcam Social Shopper
The Webcam Social Shopper, often referred to as virtual dressing room software, debuted online in June 2009 and was created by Los Angeles-based software company, Zugara. Cited initially as an "Augmented Reality Dressing Room", The Webcam Social Shopper allows online shoppers to use a webcam to visualize virtual garments on themselves while shopping online. The software also uses a motion capture system that allows users to use hand motions to navigate the software while standing back from their computer. Social media integration with Facebook and Twitter also allows users of the software to send pictures of themselves with the virtual garments for immediate feedback. Though the Webcam Social Shopper has also been called virtual fitting room or virtual dressing room software, Zugara has referred to the software as an advanced product visualization tool for retailers. On September 25, 2012, Zugara was granted US Patent No. 8,275,590 for "Providing a simulation of wearing items such as garments and/or accessories". The patent relates to Zugara's augmented reality social commerce platform, the Webcam Social Shopper. Technology The Webcam Social Shopper software can utilize 2-dimensional webcams or 3-dimensional or depth sensing cameras like Microsoft's Kinect. History In November 2009, the Webcam Social Shopper was first deployed as Fashionista by online fashion site, Tobi.com. This initial version of the Webcam Social Shopper, used an augmented reality marker for placement of the virtual garment on the subject. In February 2011, a new version of the Webcam Social Shopper was debuted publicly for the first time at the DEMO conference in Palm Springs, California. This latest version of the software removed the need for a marker and instead used facial tracking for placement of the virtual garment. Dubbed the "Plug and Play" version of the Webcam Social Shopper, this version of the software was designed for easier integration for retailers and ecommerce sites. UK Fashion Retailer, Banana Flame, was the first retailer to integrate the Plug and Play version of the virtual dressing room software. According to Matthew Szymczyk, CEO of Zugara, the new version of the Webcam Social Shopper can be integrated by a retailer in less than a day. Banana Flame deployed the software to offer a virtual dressing room for online shoppers to "try on" the clothes virtually on Banana Flame's website. On July 10, 2012, Zugara released an API for the Webcam Social Shopper for ecommerce platform integration. PrestaShop began offering the new Webcam Social Shopper module to its 127,000 retailers that same month. On October 3, 2013, Zugara released a Kinect enabled version of the Webcam Social Shopper software called, "WSS For Kiosks". On December 10, 2013, PayPal debuted a mobile payments enhanced version of WSS For Kiosks at the LeWeb conference in Paris. Initial criticism Virtual Dressing Room software, such as the Webcam Social Shopper, received early criticism for not helping users determine the fit of a virtual garment. However, subsequent users of the software have noted its effectiveness when matching colors while trying on dresses. The company has stated that the software simulates the "at the rack" moment found in physical retail stores when shoppers hold items of clothing up to themselves. With the growing popularity of ecommerce, The Webcam Social Shopper has also been cited as a tool for online shoppers that might potentially decrease sales at brick and mortar retailers. 
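As a generic illustration of the marker-free, face-tracking garment placement described in the sections above, the sketch below anchors an overlay box to a detected face using OpenCV's stock Haar cascade. This is not Zugara's implementation; the garment sizing ratios and detector settings are arbitrary assumptions.

```python
# Detect a face in a webcam frame and derive a box where a garment image
# could be overlaid, roughly at shoulder height (all ratios are illustrative).
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def garment_anchor(frame):
    """Return (x, y, width) for a garment overlay below the first detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return (x - w, y + h, 3 * w)  # centred under the face, about three face-widths wide
```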
Reviews TIME magazine cited the Webcam Social Shopper as one of the few useful augmented reality applications that could be advantageous to both retailers and consumers. The New York Times also addressed the Webcam Social Shopper software in an article entitled "Brands Embrace An Augmented Reality". In the article, a young woman used the Webcam Social Shopper software on Banana Flame's retail site and ended up buying a dress, as the software helped her make a purchase decision. Major news sources such as CNN have also profiled the Webcam Social Shopper in a segment on augmented reality. Results Internet Retailer published a report on Virtual Fitting Rooms and Fit Simulators on February 1, 2012. Danish social shopping comparison site LazyLazy.com deployed the Webcam Social Shopper in late 2011 and saw its conversion rate jump immediately: the 17% of shoppers who used the software converted two to three times more often than those who did not. In February 2012, the Mattel brand Barbie used a kiosk-enabled version of the Webcam Social Shopper for a New York Fashion Week event where attendees could try on virtual Barbie outfits. Data released by Zugara showed that usage of the web version of the Barbie Dream Closet software increased over a three-month period: from February 2012 to April 2012, use of the software increased from 20% to 33%, and 50% of those users took an average of six photos each. External links Online retailers featuring the Webcam Social Shopper on their online stores: Barbie's Dream Closet Official Website (United States) LazyLazy (Denmark) K-Bee Leotards (United States) Zawara (Malaysia) FRENCH MODE - French Luxury Fashion Designers and Web Agency (Worldwide) PrestaShop (Worldwide) La Mania (Poland) Scrubtastic (United States) LoWed (Italy) Lookz (Russia) Other Links: Zugara Company Website Webcam Social Shopper Official Website References Augmented reality Online clothing retailers Business software
18164072
https://en.wikipedia.org/wiki/7th%20Expeditionary%20Airborne%20Command%20and%20Control%20Squadron
7th Expeditionary Airborne Command and Control Squadron
The 7th Expeditionary Airborne Command and Control Squadron is part of the 379th Air Expeditionary Wing at Al Udeid Air Base, Qatar. It operates the E-8 Joint STARS aircraft, conducting airborne command and control missions. The squadron has performed the airborne command and control mission since 1968, when it was activated in Vietnam. In 1985, the squadron was consolidated with three earlier units: the 7th Ferrying Squadron, which helped deliver aircraft to the Soviet Union from 1942 until 1944; the 7th Combat Cargo Squadron, which performed combat airlift missions in the Southwest Pacific Theater from 1944 until V-J Day, then became part of the Occupation Forces in Japan until inactivating in 1946; and the 7th Air Transport Squadron, Special, which provided airlift support for the United States' special weapons programs from 1954 to 1966. History World War II ferrying operations The squadron's first predecessor was activated at Seattle Airport, Washington in March 1942 as the 7th Ferrying Squadron. The 7th ferried lend-lease aircraft to Alaska for turnover to the Soviet Union from June 1942 until disbanding in March 1944. Southwest Pacific combat airlift The second predecessor of the squadron was activated at Syracuse Army Air Base, New York on 1 May 1944 as the 7th Combat Cargo Squadron. It deployed to the Southwest Pacific Theater later that year and performed airlift until September 1945. It became part of the Occupation Forces in Japan until inactivating in early 1946, and was disbanded in inactive status on 8 October 1948. Special weapons airlift The 7th Logistic Support Squadron is the squadron's third predecessor. It was established at Robins Air Force Base, Georgia in 1954 as an Air Materiel Command unit. Its mission was to provide worldwide airlift of nuclear weapons and related equipment, with a secondary mission to airlift other Department of Defense cargo as required when space was available, using its Douglas C-124 Globemaster IIs. The squadron also provided airlift support during the Cuban Missile Crisis from 17 to 28 October 1962. In 1963, the squadron was transferred to Military Air Transport Service (MATS) in a trial to see if MATS airlift units could perform the special weapons transport mission; it remained a C-124 Globemaster II strategic transport squadron flying worldwide airlift operations. A year later it became the 7th Air Transport Squadron, Special. The squadron was inactivated on 8 January 1966, when MATS became Military Airlift Command and its squadrons became Military Airlift Squadrons. Its personnel and equipment were transferred to the 58th Military Airlift Squadron, which was simultaneously activated. Airborne command and control The 7th Airborne Command and Control Squadron was activated at Da Nang Air Base, South Vietnam in March 1968 and performed the airborne battlefield command and control (ABCCC) mission in Southeast Asia from its activation until 15 August 1973. It controlled airborne forces during the recovery of the SS Mayagüez in May 1975, in Grenada from 23 October to 21 November 1983, in Panama from December 1989 to January 1992, and in Southwest Asia from 1 September 1990 to 16 March 1991. In 1994, the 7th flag was moved from Keesler Air Force Base, Mississippi, to Offutt Air Force Base, Nebraska, where it transitioned from Lockheed EC-130 aircraft flying the ABCCC mission to the Boeing EC-135 aircraft flying the Operation Looking Glass mission in support of nuclear command and control for United States Strategic Command. 
The EC-130E aircraft and all squadron personnel moved to Davis-Monthan Air Force Base, Arizona where they continued performing the ABCCC mission as the 42d ACCS. In October 1998, the Looking Glass mission was transferred to the Navy's Boeing E-6 Mercury fleet, the last of the US Air Force's EC-135 fleet was retired, and the 7th was inactivated. In March 2008, the unit was converted to provisional status and reactivated - this time as the 7th Expeditionary Airborne Command and Control Squadron to be the forward operating squadron for E-8 Joint STARS, supporting the United States Central Command Area of Responsibility. Lineage 7th Ferrying Squadron Constituted as the 7th Air Corps Ferrying Squadron on 18 February 1942 Activated on 24 March 1942 Redesignated 7th Ferrying Squadron on 12 May 1943 Disbanded on 1 April 1944 Reconstituted and consolidated with the 7th Combat Cargo Squadron, the 7th Air Transport Squadron and the 7th Airborne Command and Control Squadron as the 7th Airborne Command and Control Squadron on 19 September 1985 7th Combat Cargo Squadron Constituted as the 7th Combat Cargo Squadron on 25 April 1944 Activated on 1 May 1944 Inactivated on 15 January 1946 Disbanded on 8 October 1948 Reconstituted and consolidated with the 7th Ferrying Squadron, the 7th Air Transport Squadron and the 7th Airborne Command and Control Squadron as the 7th Airborne Command and Control Squadron on 19 September 1985 7th Air Transport Squadron Constituted as the 7th Logistics Support Squadron on 22 June 1954 Activated on 18 October 1954 Redesignated 7th Air Transport Squadron, Special on 1 July 1964 Discontinued and inactivated on 8 January 1966 Consolidated with the 7th Ferrying Squadron, the 7th Combat Cargo Squadron and the 7th Airborne Command and Control Squadron as the 7th Airborne Command and Control Squadron on 19 September 1985 7th Expeditionary Airborne Command and Control Squadron Constituted as the 7th Airborne Command and Control Squadron and activated on 13 February 1968 (not organized) Organized on 1 March 1968 Consolidated with the 7th Ferrying Squadron, the 7th Combat Cargo Squadron and the 7th Air Transport Squadron on 19 September 1985 Inactivated on 1 October 1998 Redesignated 7th Expeditionary Airborne Command and Control Squadron and converted to provisional status on 19 March 2008 Activated on 27 March 2008 Assignments Northwest Sector, Ferrying Command (later 7th Ferrying Group), 18 February 1942 – 1 April 1944 2d Combat Cargo Group, 1 May 1944 – 15 January 1946 Warner Robins Air Materiel Area, 18 October 1954 3079th Aviation Depot Wing, 6 February 1955 39th Logistics Support Group, 1 July 1962 62d Troop Carrier Wing, 1 July 1963 63d Troop Carrier Wing, 1 July 1964 – 8 January 1966 Pacific Air Forces, 13 February 1968 (not organized) Seventh Air Force, 1 March 1968 432d Tactical Reconnaissance Wing, 31 October 1968 (attached to Seventh Air Force) 388th Tactical Fighter Wing, 30 April 1972 (attached to Seventh Air Force to 15 August 1973, US Support Activities Group/Seventh Air Force to c. 21 May 1974) 374th Tactical Airlift Wing, 22 May 1974 (attached to Thirteenth Air Force) 3d Tactical Fighter Wing, 31 March 1975 (attached to Thirteenth Air Force) 507th Tactical Air Control Group, 14 August 1975 552d Airborne Warning and Control Wing (later 552d Airborne Warning and Control Division), 1 October 1976 28th Air Division, 1 April 1985 (attached to Air Division Provisional, 15, 5 December 1990 – c. 
16 March 1991) 552d Operations Group, 29 May 1992 55th Operations Group, 19 July 1994 – 1 October 1998 Air Combat Command to activate or inactivate any time after 19 March 2008 379th Air Expeditionary Wing, 27 March 2008 – present Stations Seattle Airport, Washington, 24 March 1942 Gore Field, Montana, 22 June 1942 – 1 April 1944 Syracuse Army Air Base, New York, 1 May 1944 Baer Field, Indiana, 7–27 October 1944 Mokmer Airfield, Biak, Netherlands East Indies, 11 November 1944 Dulag Airfield, Leyte, Philippine Islands, May 1945 Okinawa, 19 August 1945 Yokota Air Base, Japan, 22 September 1945 – 15 January 1946 Robins Air Force Base, Georgia, 19 October 1954 – 8 January 1966 Da Nang Air Base, South Vietnam, 1 March 1968 (operated from Udorn Royal Thai Air Force Base, Thailand) Udorn Royal Thai Air Force Base, Thailand, 31 October 1968 Korat Royal Thai Air Force Base, Thailand, 15 April 1972 Clark Air Base, Philippines, 22 May 1974 – 14 August 1975 Keesler Air Force Base, Mississippi, 14 August 1975 – 18 July 1994 Deployed at Sharjah, United Arab Emirates, 1–25 September 1991 Deployed at Riyadh, Saudi Arabia, 25 September 1990 – 16 March 1991 Deployed at Aviano Air Base, Italy, supporting operations over Bosnia, 12 April 1993 – 18 July 1994 Offutt Air Force Base, Nebraska, 19 July 1994 – 1 October 1998 Al Udeid Air Base, Qatar, 27 March 2008 – present Aircraft Douglas C-47 Skytrain (1944–1945) Curtiss C-46 Commando (1944–1945) Douglas C-124 Globemaster II (1954–1966) Lockheed EC-130 (1968–1994) Boeing EC-135 (1994–1998) Boeing E-8 Joint STARS (2008–present) Operations World War II Vietnam War Operation Urgent Fury Operation Just Cause Operation Desert Storm Operation Deny Flight Operation Deliberate Force Operation Decisive Endeavor Operation Uphold Democracy Operation Iraqi Freedom Operation Enduring Freedom See also Organization of United States Air Force Units in the Gulf War References Notes Explanatory notes Citations Bibliography Command and control squadrons of the United States Air Force
51682417
https://en.wikipedia.org/wiki/OGAS
OGAS
OGAS (, "National Automated System for Computation and Information Processing") was a Soviet project to create a nationwide information network. The project began in 1962 but was denied necessary funding in 1970. It was one of a series of attempts to create a nationwide network analogous to what became the Internet, all of which failed. Concept The primary architect of OGAS was Viktor Glushkov. A previous proposal for a national computer network to improve central planning, Anatoly Kitov's Economic Automated Management System, had been rejected in 1959 because of concerns in the military that they would be required to share information with civilian planners. Glushkov proposed OGAS in 1962 as a three-tier network with a computer centre in Moscow, up to 200 midlevel centres in other major cities, and up to 20,000 local terminals in economically significant locations, communicating in real time using the existing telephone infrastructure. The structure would also permit any terminal to communicate with any other. Glushkov further proposed using the system to move the Soviet Union towards a moneyless economy, using the system for electronic payments. The project failed because Glushkov's request for funding on 1 October 1970 was turned down. The 24th Communist Party Congress in 1971 was to have authorised implementation of the plan, but ultimately endorsed only expansion of local information management systems. Glushkov subsequently pursued another network plan, EGSVT, which was also underfunded and not carried out. The OGAS proposal was resented by some liberals as excessive central control, but failed primarily because of bureaucratic infighting: it was under the auspices of the Central Statistical Administration and as such fell afoul of Vasily Garbuzov, who saw a threat to his Ministry of Finance. When EGSVT failed, the next attempt (SOFE) was done in 1964 by Nikolay Fedorenko, who attempted to build an information network that could be used in economic planning in Soviet Union's planned economy. The project was successful at a micro-level but did not spread into wide use. Cybernetic economic planning Beginning in the early 1960s, the Communist Party of the Soviet Union considered moving away from the existing Stalinist command planning in favor of developing an interlinked computerized system of resource allocation based on the principles of Cybernetics. This development was seen as the basis for moving toward optimal planning that could form the basis of a more highly developed form of socialist economy based on informational decentralization and innovation. This was seen as a logical progression given that the material balances system was geared toward rapid industrialization, which the Soviet Union had already achieved in the preceding decades. But by the early 1970s the idea of transcending the status quo was abandoned by the Soviet leadership, who felt the system threatened Party control of the economy. By the early 1970s official interest in this system ended. By the end of 1970s the "natural" development of Soviet computers lead to creation of the project called Academset aimed at construction of nationwide optic fiber and radio/satellite digital network but only the Leningrad part of it was actually implemented before dissolution of the USSR. By 1992 Soviet computers serving it were destroyed and in 1990 USSR/Russia obtained a state-independent global Internet connection via telephone to Finland due to efforts of a private telecom enterprise called Relcom. 
2016 book In 2016, a book dedicated to OGAS was published in the US, entitled "How Not to Network a Nation: The Uneasy History of the Soviet Internet", by Benjamin Peters, a professor at the University of Tulsa. Harvard University professor Jonathan Zittrain wrote that the book "fills an important gap in the Internet's history, highlighting the ways in which generativity and openness have been essential to networked innovation". A reviewer of the book at MIT wrote: "Soviet attempts to build a national computer network were undone by socialists who seemed to behave like capitalists". See also ARPANET Project Cybersyn Academset References Computer networks Science and technology in the Soviet Union Cybernetics Communications in the Soviet Union Economic planning Information management Socialism
23648411
https://en.wikipedia.org/wiki/LEd
LEd
LEd (formerly LaTeX Editor) is a TeX and LaTeX editing program for Microsoft Windows. It is a freeware product. LEd offers a project manager, a powerful editor, an integrated spellchecker and thesaurus, a built-in DVI viewer, descriptive hints for LaTeX commands, a code completion mechanism, word wrapping, code folding, a multilingual environment, and more. Nevertheless, LaTeX Editor is a small program. System requirements LEd works on the Windows® 95/98/Me/NT4/2000/XP/2003 operating systems. LEd's capabilities vary according to the operating system used, e.g., Visual Styles from Windows® XP. However, it works with almost full functionality even on Windows® 95. The recommended system configuration is a 333 MHz processor, 64 MB of RAM, 4 MB of disk space for the standard edition plus space for projects (disk usage depends on the spellchecking and thesaurus dictionaries), and the Windows 2000/XP/2003 operating system. Compatible TeX distributions LEd can be used with any TeX distribution; however, its full functionality is available with a distribution based on TeXLive or MiKTeX. LEd has been tested to work correctly with TeXLive 6, TeXLive 7, TeXLive 2003, TeXLive 2004, TeXLive 2005, MiKTeX 2.4, and MiKTeX 2.5. License LaTeX Editor (LEd) and all add-ons published at http://www.latexeditor.org are copyrighted software. All rights are retained by LEd's authors, Sebastian Deorowicz and Adam Skórczyński, or by the authors of packages distributed together with LEd. LaTeX Editor is distributed free of charge. It can be redistributed and/or modified, but it should always carry a notice that it is a version of LaTeX Editor. See also Comparison of TeX editors External links TeX editors Proprietary software Windows-only freeware 2005 software
21401843
https://en.wikipedia.org/wiki/Freedom%20of%20speech
Freedom of speech
Freedom of speech is a principle that supports the freedom of an individual or a community to articulate their opinions and ideas without fear of retaliation, censorship, or legal sanction. The right to freedom of expression has been recognised as a human right in the Universal Declaration of Human Rights and international human rights law by the United Nations. Many countries have constitutional law that protects free speech. Terms like free speech, freedom of speech, and freedom of expression are used interchangeably in political discourse. However, in a legal sense, the freedom of expression includes any activity of seeking, receiving, and imparting information or ideas, regardless of the medium used. Article 19 of the UDHR states that "everyone shall have the right to hold opinions without interference" and "everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive, and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice." The version of Article 19 in the ICCPR later amends this by stating that the exercise of these rights carries "special duties and responsibilities" and may "therefore be subject to certain restrictions" when necessary "[f]or respect of the rights or reputation of others" or "[f]or the protection of national security or of public order (order public), or of public health or morals." Freedom of speech and expression, therefore, may not be recognized as being absolute, and common limitations or boundaries to freedom of speech relate to libel, slander, obscenity, pornography, sedition, incitement, fighting words, classified information, copyright violation, trade secrets, food labeling, non-disclosure agreements, the right to privacy, dignity, the right to be forgotten, public security, and perjury. Justifications for such include the harm principle, proposed by John Stuart Mill in On Liberty, which suggests that "the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others." The idea of the "offense principle" is also used to justify speech limitations, describing the restriction on forms of expression deemed offensive to society, considering factors such as extent, duration, motives of the speaker, and ease with which it could be avoided. With the evolution of the digital age, application of freedom of speech becomes more controversial as new means of communication and restrictions arise, for example, the Golden Shield Project, an initiative by Chinese government's Ministry of Public Security that filters potentially unfavourable data from foreign countries. Origins and history Freedom of speech and expression has a long history that predates modern international human rights instruments. It is thought that the ancient Athenian democratic principle of free speech may have emerged in the late 6th or early 5th century BC. The values of the Roman Republic included freedom of speech and freedom of religion. Freedom of speech was vindicated by Erasmus and Milton. Edward Coke claimed freedom of speech as "an ancient custom of Parliament" in the 1590s, and it was affirmed in the Protestation of 1621. England's Bill of Rights 1689 legally established the constitutional right of freedom of speech in Parliament which is still in effect, so-called parliamentary privilege. 
One of the world's first freedom of the press acts was introduced in Sweden in 1766, mainly due to the classical liberal member of parliament and Ostrobothnian priest Anders Chydenius. Excepted and liable to prosecution was only vocal opposition to the King and the Church of Sweden. The Declaration of the Rights of Man and of the Citizen, adopted during the French Revolution in 1789, specifically affirmed freedom of speech as an inalienable right. Adopted in 1791, freedom of speech is a feature of the First Amendment to the United States Constitution. The French Declaration provides for freedom of expression in Article 11, which states that: Article 19 of the Universal Declaration of Human Rights, adopted in 1948, states that: Today, freedom of speech, or the freedom of expression, is recognised in international and regional human rights law. The right is enshrined in Article 19 of the International Covenant on Civil and Political Rights, Article 10 of the European Convention on Human Rights, Article 13 of the American Convention on Human Rights and Article 9 of the African Charter on Human and Peoples' Rights. Based on John Milton's arguments, freedom of speech is understood as a multi-faceted right that includes not only the right to express, or disseminate, information and ideas but three further distinct aspects: the right to seek information and ideas; the right to receive information and ideas; the right to impart information and ideas International, regional and national standards also recognise that freedom of speech, as the freedom of expression, includes any medium, whether orally, in writing, in print, through the internet or art forms. This means that the protection of freedom of speech as a right includes the content and the means of expression. Relationship to other rights The right to freedom of speech and expression is closely related to other rights. It may be limited when conflicting with other rights (see limitations on freedom of speech). The right to freedom of expression is also related to the right to a fair trial and court proceeding which may limit access to the search for information, or determine the opportunity and means in which freedom of expression is manifested within court proceedings. As a general principle freedom of expression may not limit the right to privacy, as well as the honor and reputation of others. However, greater latitude is given when criticism of public figures is involved. The right to freedom of expression is particularly important for media, which plays a special role as the bearer of the general right to freedom of expression for all. However, freedom of the press does not necessarily enable freedom of speech. Judith Lichtenberg has outlined conditions in which freedom of the press may constrain freedom of speech. For example, if all the people who control the various mediums of publication suppress information or stifle the diversity of voices inherent in freedom of speech. This limitation was famously summarised as "Freedom of the press is guaranteed only to those who own one". Lichtenberg argues that freedom of the press is simply a form of property right summed up by the principle "no money, no voice." As a negative right Freedom of speech is usually seen as a negative right. 
This means that the government is legally obliged to take no action against the speaker based on the speaker's views, but that no one is obliged to help any speakers publish their views, and no one is required to listen to, agree with, or acknowledge the speaker or the speaker's views. Democracy in relation to social interaction Freedom of speech is understood to be fundamental in a democracy. The norms on limiting freedom of expression mean that public debate may not be completely suppressed even in times of emergency. One of the most notable proponents of the link between freedom of speech and democracy is Alexander Meiklejohn. He has argued that the concept of democracy is that of self-government by the people. For such a system to work, an informed electorate is necessary. In order to be appropriately knowledgeable, there must be no constraints on the free flow of information and ideas. According to Meiklejohn, democracy will not be true to its essential ideal if those in power can manipulate the electorate by withholding information and stifling criticism. Meiklejohn acknowledges that the desire to manipulate opinion can stem from the motive of seeking to benefit society. However, he argues, choosing manipulation negates, in its means, the democratic ideal. Eric Barendt has called this defence of free speech on the grounds of democracy "probably the most attractive and certainly the most fashionable free speech theory in modern Western democracies." Thomas I. Emerson expanded on this defence when he argued that freedom of speech helps to provide a balance between stability and change. Freedom of speech acts as a "safety valve" to let off steam when people might otherwise be bent on revolution. He argues that "The principle of open discussion is a method of achieving a more adaptable and at the same time more stable community, of maintaining the precarious balance between healthy cleavage and necessary consensus." Emerson furthermore maintains that "Opposition serves a vital social function in offsetting or ameliorating (the) normal process of bureaucratic decay." Research undertaken by the Worldwide Governance Indicators project at the World Bank, indicates that freedom of speech, and the process of accountability that follows it, have a significant impact on the quality of governance of a country. "Voice and Accountability" within a country, defined as "the extent to which a country's citizens are able to participate in selecting their government, as well as freedom of expression, freedom of association, and free media" is one of the six dimensions of governance that the Worldwide Governance Indicators measure for more than 200 countries. Against this backdrop it is important that development agencies create grounds for effective support for a free press in developing countries. Richard Moon has developed the argument that the value of freedom of speech and freedom of expression lies with social interactions. Moon writes that "by communicating an individual forms relationships and associations with others – family, friends, co-workers, church congregation, and countrymen. By entering into discussion with others an individual participates in the development of knowledge and in the direction of the community." 
Limitations Freedom of speech is not regarded as absolute by some, with most legal systems generally setting limits on the freedom of speech, particularly when freedom of speech conflicts with other rights and protections, such as in the cases of libel, slander, pornography, obscenity, fighting words, and intellectual property. Some limitations to freedom of speech may occur through legal sanction, and others may occur through social disapprobation. Harmful and offensive content Some views are illegal to express because they can cause harm to others. This category often includes speech that is both false and dangerous, such as falsely shouting "Fire!" in a theatre and causing a panic. Justifications for limitations to freedom of speech often reference the "harm principle" or the "offence principle." In On Liberty (1859), John Stuart Mill argued that "...there ought to exist the fullest liberty of professing and discussing, as a matter of ethical conviction, any doctrine, however immoral it may be considered." Mill argues that the fullest liberty of expression is required to push arguments to their logical limits, rather than the limits of social embarrassment. In 1985, Joel Feinberg introduced what is known as the "offence principle". Feinberg wrote, "It is always a good reason in support of a proposed criminal prohibition that it would probably be an effective way of preventing serious offence (as opposed to injury or harm) to persons other than the actor, and that it is probably a necessary means to that end." Hence Feinberg argues that the harm principle sets the bar too high and that some forms of expression can be legitimately prohibited by law because they are very offensive. Nevertheless, as offending someone is less serious than harming someone, the penalties imposed should be higher for causing harm. In contrast, Mill does not support legal penalties unless they are based on the harm principle. Because the degree to which people may take offence varies, or may be the result of unjustified prejudice, Feinberg suggests that several factors need to be taken into account when applying the offence principle, including: the extent, duration and social value of the speech, the ease with which it can be avoided, the motives of the speaker, the number of people offended, the intensity of the offence, and the general interest of the community at large. Jasper Doomen argued that harm should be defined from the point of view of the individual citizen, not limiting harm to physical harm since nonphysical harm may also be involved; Feinberg's distinction between harm and offence is criticized as largely trivial. In 1999, Bernard Harcourt wrote of the collapse of the harm principle: "Today the debate is characterized by a cacophony of competing harm arguments without any way to resolve them. There is no longer an argument within the structure of the debate to resolve the competing claims of harm. The original harm principle was never equipped to determine the relative importance of harms." Interpretations of both the harm and offense limitations to freedom of speech are culturally and politically relative. For instance, in Russia, the harm and offense principles have been used to justify the Russian LGBT propaganda law restricting speech (and action) concerning LGBT issues. Many European countries that take pride in freedom of speech nevertheless outlaw speech that might be interpreted as Holocaust denial. 
These include Austria, Belgium, Canada, the Czech Republic, France, Germany, Hungary, Israel, Liechtenstein, Lithuania, Luxembourg, Netherlands, Poland, Portugal, Russia, Slovakia, Switzerland and Romania. Armenian genocide denial is also illegal in some countries. In some countries, blasphemy is a crime. For example, in Austria, defaming Muhammad, the prophet of Islam, is not protected as free speech. In contrast, in France, blasphemy and disparagement of Muhammad are protected under free speech law. Certain public institutions may also enact policies restricting the freedom of speech, for example, speech codes at state-operated schools. In the U.S., the standing landmark opinion on political speech is Brandenburg v. Ohio (1969), expressly overruling Whitney v. California. In Brandenburg, the U.S. Supreme Court referred to the right even to speak openly of violent action and revolution in broad terms: The opinion in Brandenburg discarded the previous test of "clear and present danger" and made the right to freedom of (political) speech protections in the United States almost absolute. Hate speech is also protected by the First Amendment in the United States, as decided in R.A.V. v. City of St. Paul, (1992) in which the Supreme Court ruled that hate speech is permissible, except in the case of imminent violence. See the First Amendment to the United States Constitution for more detailed information on this decision and its historical background. Time, place, and manner Limitations based on time, place, and manner apply to all speech, regardless of the view expressed. They are generally restrictions that are intended to balance other rights or a legitimate government interest. For example, a time, place, and manner restriction might prohibit a noisy political demonstration at a politician's home during the middle of the night, as that impinges upon the rights of the politician's neighbors to quiet enjoyment of their own homes. An otherwise identical activity might be permitted if it happened at a different time (e.g., during the day), at a different place (e.g., at a government building or in another public forum), or in a different manner (e.g., a silent protest). The Internet and information society Jo Glanville, editor of the Index on Censorship, states that "the Internet has been a revolution for censorship as much as for free speech". International, national and regional standards recognise that freedom of speech, as one form of freedom of expression, applies to any medium, including the Internet. The Communications Decency Act (CDA) of 1996 was the first major attempt by the United States Congress to regulate pornographic material on the Internet. In 1997, in the landmark cyberlaw case of Reno v. ACLU, the US Supreme Court partially overturned the law. Judge Stewart R. Dalzell, one of the three federal judges who in June 1996 declared parts of the CDA unconstitutional, in his opinion stated the following: The World Summit on the Information Society (WSIS) Declaration of Principles adopted in 2003 makes specific reference to the importance of the right to freedom of expression for the "Information Society" in stating: According to Bernt Hugenholtz and Lucie Guibault, the public domain is under pressure from the "commodification of information" as information with previously little or no economic value has acquired independent economic value in the information age. This includes factual data, personal data, genetic information and pure ideas. 
The commodification of information is taking place through intellectual property law, contract law, as well as broadcasting and telecommunications law. Freedom of information Freedom of information is an extension of freedom of speech where the medium of expression is the Internet. Freedom of information may also refer to the right to privacy in the context of the Internet and information technology. As with the right to freedom of expression, the right to privacy is a recognised human right and freedom of information acts as an extension to this right. Freedom of information may also concern censorship in an information technology context, i.e. the ability to access Web content, without censorship or restrictions. Freedom of information is also explicitly protected by acts such as the Freedom of Information and Protection of Privacy Act of Ontario, in Canada. The Access to Information Act gives Canadian citizens, permanent residents, and any person or corporation present in Canada a right to access records of government institutions that are subject to the Act. Internet censorship The concept of freedom of information has emerged in response to state sponsored censorship, monitoring and surveillance of the internet. Internet censorship includes the control or suppression of the publishing or accessing of information on the Internet. The Global Internet Freedom Consortium claims to remove blocks to the "free flow of information" for what they term "closed societies." According to the Reporters without Borders (RWB) "internet enemy list" the following states engage in pervasive internet censorship: Mainland China, Cuba, Iran, Myanmar/Burma, North Korea, Saudi Arabia, Syria, Turkmenistan, Uzbekistan, and Vietnam. A widely publicized example of internet censorship is the "Great Firewall of China" (in reference both to its role as a network firewall and the ancient Great Wall of China). The system blocks content by preventing IP addresses from being routed through and consists of standard firewall and proxy servers at the internet gateways. The system also selectively engages in DNS poisoning when particular sites are requested. The government does not appear to be systematically examining Internet content, as this appears to be technically impractical. Internet censorship in the People's Republic of China is conducted under a wide variety of laws and administrative regulations, including more than sixty regulations directed at the Internet. Censorship systems are vigorously implemented by provincial branches of state-owned ISPs, business companies, and organizations. Challenge of disinformation Some legal scholars (such as Tim Wu of Columbia University) have argued that the traditional issues of free speech—that "the main threat to free speech" is the censorship of "suppressive states," and that "ill-informed or malevolent speech" can and should be overcome by "more and better speech" rather than censorship—assumes scarcity of information. This scarcity prevailed during the 20th century, but with the arrival of the internet, information became plentiful, "but the attention of listeners" scarce. Furthermore, in the words of Wu, this "cheap speech" made possible by the internet " ... may be used to attack, harass, and silence as much as it is used to illuminate or debate." 
In the 21st century, the danger is not "suppressive states" that target "speakers directly", but that History of dissent and truth Before the invention of the printing press, a written work, once created, could only be physically multiplied by highly laborious and error-prone manual copying. No elaborate system of censorship and control over scribes existed, who until the 14th century were restricted to religious institutions, and their works rarely caused wider controversy. In response to the printing press, and the theological heresies it allowed to spread, the Roman Catholic Church moved to impose censorship. Printing allowed for multiple exact copies of a work, leading to a more rapid and widespread circulation of ideas and information (see print culture). The origins of copyright law in most European countries lie in efforts by the Roman Catholic Church and governments to regulate and control the output of printers. In 1501, Pope Alexander VI issued a Bill against the unlicensed printing of books. In 1559, Pope Paul IV promulgated the Index Expurgatorius, or List of Prohibited Books. The Index Expurgatorius is the most famous and long-lasting example of "bad books" catalogues issued by the Roman Catholic Church, which presumed to be in authority over private thoughts and opinions, and suppressed views that went against its doctrines. The Index Expurgatorius was administered by the Roman Inquisition, but enforced by local government authorities, and went through 300 editions. Amongst others, it banned or censored books written by René Descartes, Giordano Bruno, Galileo Galilei, David Hume, John Locke, Daniel Defoe, Jean-Jacques Rousseau and Voltaire. While governments and church encouraged printing in many ways because it allowed for the dissemination of Bibles and government information, works of dissent and criticism could also circulate rapidly. Consequently, governments established controls over printers across Europe, requiring them to have official licenses to trade and produce books. The notion that the expression of dissent or subversive views should be tolerated, not censured or punished by law, developed alongside the rise of printing and the press. Areopagitica, published in 1644, was John Milton's response to the Parliament of England's re-introduction of government licensing of printers, hence publishers. Church authorities had previously ensured that Milton's essay on the right to divorce was refused a license for publication. In Areopagitica, published without a license, Milton made an impassioned plea for freedom of expression and toleration of falsehood, stating: Milton's defense of freedom of expression was grounded in a Protestant worldview. He thought that the English people had the mission to work out the truth of the Reformation, which would lead to the enlightenment of all people. Nevertheless, Milton also articulated the main strands of future discussions about freedom of expression. By defining the scope of freedom of expression and "harmful" speech, Milton argued against the principle of pre-censorship and in favor of tolerance for a wide range of views. Freedom of the press ceased being regulated in England in 1695 when the Licensing Order of 1643 was allowed to expire after the introduction of the Bill of Rights 1689 shortly after the Glorious Revolution. The emergence of publications like the Tatler (1709) and the Spectator (1711) are given credit for creating a 'bourgeois public sphere' in England that allowed for a free exchange of ideas and information. 
More governments attempted to centralize control as the "menace" of printing spread. The French crown repressed printing and the printer Etienne Dolet was burned at the stake in 1546. In 1557 the British Crown thought to stem the flow of seditious and heretical books by chartering the Stationers' Company. The right to print was limited to the members of that guild. Thirty years later, the Star Chamber was chartered to curtail the "greate enormities and abuses" of "dyvers contentyous and disorderlye persons professinge the arte or mystere of pryntinge or selling of books." The right to print was restricted to two universities and the 21 existing printers in the city of London, which had 53 printing presses. As the British crown took control of type founding in 1637, printers fled to the Netherlands. Confrontation with authority made printers radical and rebellious, with 800 authors, printers, and book dealers being incarcerated in the Bastille in Paris before it was stormed in 1789. A succession of English thinkers was at the forefront of early discussion on a right to freedom of expression, among them John Milton (1608–74) and John Locke (1632–1704). Locke established the individual as the unit of value and the bearer of rights to life, liberty, property and the pursuit of happiness. However, Locke's ideas evolved primarily around the concept of the right to seek salvation for one's soul. He was thus primarily concerned with theological matters. Locke neither supported a universal toleration of peoples nor freedom of speech; according to his ideas, some groups, such as atheists, should not be allowed. By the second half of the 17th century philosophers on the European continent like Baruch Spinoza and Pierre Bayle developed ideas encompassing a more universal aspect freedom of speech and toleration than the early English philosophers. By the 18th century the idea of freedom of speech was being discussed by thinkers all over the Western world, especially by French philosophes like Denis Diderot, Baron d'Holbach and Claude Adrien Helvétius. The idea began to be incorporated in political theory both in theory as well as practice; the first state edict in history proclaiming complete freedom of speech was the one issued 4 December 1770 in Denmark-Norway during the regency of Johann Friedrich Struensee. However Struensee himself imposed some minor limitations to this edict on 7 October 1771, and it was even further limited after the fall of Struensee with legislation introduced in 1773, although censorship was not reintroduced. John Stuart Mill (1806–1873) argued that without human freedom, there could be no progress in science, law, or politics, which according to Mill, required free discussion of opinion. Mill's On Liberty, published in 1859, became a classic defence of the right to freedom of expression. Mill argued that truth drives out falsity, therefore the free expression of ideas, true or false, should not be feared. Truth is not stable or fixed but evolves with time. Mill argued that much of what we once considered true has turned out false. Therefore, views should not be prohibited for their apparent falsity. Mill also argued that free discussion is necessary to prevent the "deep slumber of a decided opinion". Discussion would drive the march of truth, and by considering false views, the basis of true views could be re-affirmed. 
Furthermore, Mill argued that an opinion only carries intrinsic value to the owner of that opinion, thus silencing the expression of that opinion is an injustice to a basic human right. For Mill, the only instance in which speech can be justifiably suppressed is to prevent harm from a clear and direct threat. Neither economic or moral implications nor the speaker's own well-being would justify suppression of speech. In her 1906 biography of Voltaire, Evelyn Beatrice Hall coined the following sentence to illustrate Voltaire's beliefs: "I disapprove of what you say, but I will defend to the death your right to say it." Hall's quote is frequently cited to describe the principle of freedom of speech. Noam Chomsky stated, "If you believe in freedom of speech, you believe in freedom of speech for views you don't like. Dictators such as Stalin and Hitler, were in favor of freedom of speech for views they liked only. If you're in favor of freedom of speech, that means you're in favor of freedom of speech precisely for views you despise." Lee Bollinger argues that "the free speech principle involves a special act of carving out one area of social interaction for extraordinary self-restraint, the purpose of which is to develop and demonstrate a social capacity to control feelings evoked by a host of social encounters." Bollinger argues that tolerance is a desirable value, if not essential. However, critics argue that society should be concerned by those who directly deny or advocate, for example, genocide (see limitations above). As chairman of the London-based PEN International, a club which defends freedom of expression and a free press, English author H. G. Wells met with Stalin in 1934 and was hopeful of reform in the Soviet Union. However, during their meeting in Moscow, Wells said, "the free expression of opinion—even of opposition opinion, I do not know if you are prepared yet for that much freedom here." The 1928 novel Lady Chatterley's Lover by D. H. Lawrence was banned for obscenity in several countries, including the United Kingdom, the United States, Australia, Canada, and India. In the late 1950s and early 1960s, it was the subject of landmark court rulings that saw the ban for obscenity overturned. Dominic Sandbrook of The Telegraph in the UK wrote, "Now that public obscenity has become commonplace, it is hard to recapture the atmosphere of a society that saw fit to ban books such as Lady Chatterley's Lover because it was likely to 'deprave and corrupt' its readers." Fred Kaplan of The New York Times stated the overturning of the obscenity laws "set off an explosion of free speech" in the U.S. The 1960s also saw the Free Speech Movement, a massive long-lasting student protest on the campus of the University of California, Berkeley during the 1964–65 academic year. In contrast to Anglophone nations, France was a haven for literary freedom. The innate French regard for the mind meant that France was disinclined to punish literary figures for their writing, and prosecutions were rare. While it was prohibited everywhere else, James Joyce's Ulysses was published in Paris in 1922. Henry Miller's 1934 novel Tropic of Cancer (banned in the U.S. until 1963) and Lawrence's Lady Chatterley's Lover were published in France decades before they were available in the respective authors' home countries. In 1964 comedian Lenny Bruce was arrested in the U.S. due to complaints again about his use of various obscenities. A three-judge panel presided over his widely publicized six-month trial. 
He was found guilty of obscenity in November 1964. He was sentenced on 21 December 1964, to four months in a workhouse. He was set free on bail during the appeals process and died before the appeal was decided. On 23 December 2003, thirty-seven years after Bruce's death, New York Governor George Pataki granted him a posthumous pardon for his obscenity conviction. In the United States, the right to freedom of expression has been interpreted to include the right to take and publish photographs of strangers in public areas without their permission or knowledge. This is not the case worldwide. Offences In some countries, people are not allowed to talk about certain things. Doing so constitutes an offence. For example, Saudi Arabia was responsible for the assassination of journalist Jamal Khashoggi in 2018: after he entered the Saudi consulate in Istanbul, Turkey, a team of Saudi agents killed him. Another Saudi writer, Raif Badawi, was arrested in 2012 and publicly flogged. Freedom of speech on college campuses In July 2014, the University of Chicago released the "Chicago Statement," a free speech policy statement designed to combat censorship on campus. This statement was later adopted by a number of top-ranked universities including Princeton University, Washington University in St. Louis, Johns Hopkins University, and Columbia University. Commentators such as Vox's Zack Beauchamp and Chris Quintana, writing in The Chronicle of Higher Education, have disputed the assumption that college campuses are facing a "free-speech crisis." See also Academic freedom Artistic freedom The Confessionals Cancel culture Civil and political rights Digital rights Election silence Everybody Draw Mohammed Day Forced or compelled speech Free speech fights Freedom of thought Glasnost Global Network Initiative Hate speech Heckler's veto IFEX (organization) Illegal number Je suis Charlie Jyllands-Posten Muhammad cartoons controversy Jamal Khashoggi Legality of Holocaust denial Post–World War II legality of Nazi flags Strafgesetzbuch section 86a Verbotsgesetz 1947 Bans on communist symbols Lolicon/Shotacon Legal status of drawn pornography depicting minors Simulated child pornography Market for loyalties theory Media transparency No Platform Open court principle Paradox of tolerance Parrhesia Photography Is Not a Crime Pirate Party Political correctness Rights Safety of journalists Stanley v. Georgia Symbolic speech Victimless crime References Further reading Shaw, Caroline. "Freedom of expression and the palladium of British liberties, 1650–2000: A review essay" History Compass (Oct 2020) online External links Article19.org, Global Campaign for Free Expression. Free Speech Debate, a research project of the Dahrendorf Programme for the Study of Freedom at St Antony's College in the University of Oxford. Index on Censorship, an international organisation that promotes and defends the right to freedom of expression. Media Freedom Navigator: Media Freedom Indices at a Glance, Deutsche Welle Akademie. Special Rapporteur for Freedom of Expression, Organization of American States. Censorship Speech Human rights
25143203
https://en.wikipedia.org/wiki/.NET%20Framework
.NET Framework
The .NET Framework (pronounced as "dot net") is a proprietary software framework developed by Microsoft that runs primarily on Microsoft Windows. It was the predominant implementation of the Common Language Infrastructure (CLI) until being superseded by the cross-platform .NET project. It includes a large class library called Framework Class Library (FCL) and provides language interoperability (each language can use code written in other languages) across several programming languages. Programs written for .NET Framework execute in a software environment (in contrast to a hardware environment) named the Common Language Runtime (CLR). The CLR is an application virtual machine that provides services such as security, memory management, and exception handling. As such, computer code written using .NET Framework is called "managed code". FCL and CLR together constitute the .NET Framework. FCL provides the user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. Programmers produce software by combining their source code with .NET Framework and other libraries. The framework is intended to be used by most new applications created for the Windows platform. Microsoft also produces an integrated development environment for .NET software called Visual Studio. .NET Framework began as proprietary software, although the firm worked to standardize the software stack almost immediately, even before its first release. Despite the standardization efforts, developers, mainly those in the free and open-source software communities, expressed their unease with the selected terms and the prospects of any free and open-source implementation, especially regarding software patents. Since then, Microsoft has changed .NET development to more closely follow a contemporary model of a community-developed software project, including issuing an update to its patent promise to address the concerns. In April 2019, Microsoft released .NET Framework 4.8, the last version of the framework as a proprietary offering. Only monthly security and reliability bug fixes to that version have been released since then. No further changes to that version are planned. History Microsoft began developing .NET Framework in the late 1990s, originally under the name of Next Generation Windows Services (NGWS), as part of the .NET strategy. By early 2000, the first beta versions of .NET 1.0 were released. In August 2000, Microsoft, Hewlett-Packard, and Intel worked to standardize the Common Language Infrastructure (CLI) and C#. By December 2001, both were ratified as Ecma International (ECMA) standards. The International Organization for Standardization (ISO) followed in April 2003. The current versions of the ISO standards are ISO/IEC 23271:2012 and ISO/IEC 23270:2006. While Microsoft and their partners hold patents for CLI and C#, ECMA and ISO require that all patents essential to implementation be made available under "reasonable and non-discriminatory terms". The firms agreed to meet these terms, and to make the patents available royalty-free. However, this did not apply to the parts of .NET Framework not covered by ECMA-ISO standards, which included Windows Forms, ADO.NET, and ASP.NET. Patents that Microsoft holds in these areas may have deterred non-Microsoft implementations of the full framework. On October 3, 2007, Microsoft announced that the source code for .NET Framework 3.5 libraries was to become available under the Microsoft Reference Source License (Ms-RSL).
The source code repository became available online on January 16, 2008 and included BCL, ASP.NET, ADO.NET, Windows Forms, WPF, and XML. Scott Guthrie of Microsoft promised that LINQ, WCF, and WF libraries were being added. The .NET Compact Framework and .NET Micro Framework variants of the .NET Framework provided support for other Microsoft platforms such as Windows Mobile, Windows CE and other resource-constrained embedded devices. Silverlight provided support for web browsers via plug-ins. In November 2014, Microsoft also produced an update to its patent grants, which further extends the scope beyond its prior pledges. Prior projects like Mono existed in a legal grey area because Microsoft's earlier grants applied only to the technology in "covered specifications", including strictly the 4th editions each of ECMA-334 and ECMA-335. The new patent promise, however, places no ceiling on the specification version, and even extends to any .NET runtime technologies documented on MSDN that have not been formally specified by the ECMA group, if a project chooses to implement them. This allows Mono and other projects to maintain feature parity with modern .NET features that have been introduced since the 4th edition was published without being at risk of patent litigation over the implementation of those features. The new grant does maintain the restriction that any implementation must maintain minimum compliance with the mandatory parts of the CLI specification. On March 31, 2016, Microsoft announced at Microsoft Build that they will completely relicense Mono under an MIT License even in scenarios where formerly a commercial license was needed. Microsoft also supplemented its prior patent promise for Mono, stating that they will not assert any "applicable patents" against parties that are "using, selling, offering for sale, importing, or distributing Mono." It was announced that the Mono Project was contributed to the .NET Foundation. These developments followed the acquisition of Xamarin, which began in February 2016 and was finished on March 18, 2016. Microsoft's press release highlights that the cross-platform commitment now allows for a fully open-source, modern server-side .NET stack. Microsoft released the source code for WPF, Windows Forms and WinUI on December 4, 2018. Architecture Common Language Infrastructure Common Language Infrastructure (CLI) provides a language-neutral platform for application development and execution. By implementing the core aspects of .NET Framework within the scope of CLI, these functions will not be tied to one language but will be available across the many languages supported by the framework. Common Language Runtime .NET Framework includes the Common Language Runtime (CLR). It serves as the execution engine of .NET Framework and offers many services such as memory management, type safety, exception handling, garbage collection, security and thread management. All programs written for .NET Framework are executed by the CLR. Programs written for .NET Framework are compiled into Common Intermediate Language code (CIL), as opposed to being directly compiled into machine code. During execution, an architecture-specific just-in-time compiler (JIT) turns the CIL code into machine code. Assemblies Compiled CIL code is stored in CLI assemblies. As mandated by the specification, assemblies are stored in Portable Executable (PE) file format, common on Windows platform for all dynamic-link library (DLL) and executable EXE files. 
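The compilation model just described can be observed from managed code itself. The following short C# sketch (illustrative only; the class and method names are invented for the example) uses the reflection types in the System.Reflection namespace to print the identity of the assembly it was loaded from and the raw CIL bytes of one of its own methods:

using System;
using System.Reflection;

// Illustrative sketch: inspecting the CIL stored in a compiled assembly.
static class CilDemo
{
    // A trivial method whose compiled CIL body is dumped below.
    static int Add(int a, int b)
    {
        return a + b;
    }

    static void Main()
    {
        // The PE assembly this code was compiled into.
        Assembly asm = typeof(CilDemo).Assembly;
        Console.WriteLine("Loaded from: " + asm.FullName);

        // Retrieve the CIL byte stream of Add(); the JIT compiler turns these
        // bytes into architecture-specific machine code when Add is first called.
        MethodInfo add = typeof(CilDemo).GetMethod(
            "Add", BindingFlags.NonPublic | BindingFlags.Static);
        byte[] cil = add.GetMethodBody().GetILAsByteArray();
        Console.WriteLine("Add() is stored as " + cil.Length + " bytes of CIL.");

        Console.WriteLine("Add(2, 3) = " + Add(2, 3));
    }
}

The FullName printed here is the assembly's complete name, whose components are described next.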
Each assembly consists of one or more files, one of which must contain a manifest bearing the metadata for the assembly. The complete name of an assembly (not to be confused with the file name on disk) contains its simple text name, version number, culture, and public key token. Assemblies are considered equivalent if they share the same complete name. A private key can also be used by the creator of the assembly for strong naming. The public key token identifies which private key an assembly is signed with. Only the creator of the key pair (typically the person signing the assembly) can sign assemblies that have the same strong name as a prior version of the assembly, since the creator possesses the private key. Strong naming is required to add assemblies to the Global Assembly Cache. Starting with Visual Studio 2015, .NET Native compilation technology allows for the compilation of the .NET code of Universal Windows Platform apps directly to machine code rather than CIL code, but the app must be written in either C# or Visual Basic.NET. Class library .NET Framework includes an implementation of the CLI foundational Standard Libraries. The .NET Framework Class Library (FCL) is organized in a hierarchy of namespaces. Most of the built-in application programming interfaces (APIs) are part of either the System.* or Microsoft.* namespaces. These class libraries implement many common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation. The class libraries are available for all CLI-compliant languages. The FCL implements the CLI Base Class Library (BCL) and other class libraries—some are specified by the CLI and others are Microsoft-specific. The BCL is a small subset of the entire class library and is the core set of classes that serve as the basic API of the CLR. In .NET Framework, most classes considered part of the BCL reside in mscorlib.dll, System.dll and System.Core.dll. BCL classes are available in .NET Framework as well as its alternative implementations, including .NET Compact Framework, Microsoft Silverlight, .NET Core and Mono. The FCL refers to the entire class library that ships with .NET Framework. It includes an expanded set of libraries, including the BCL, Windows Forms, ASP.NET, and Windows Presentation Foundation (WPF), as well as extensions to the base class libraries such as ADO.NET, Language Integrated Query (LINQ), Windows Communication Foundation (WCF), and Workflow Foundation (WF). The FCL is much larger in scope than the standard libraries of languages like C++, and comparable in scope to the standard libraries of Java. With the introduction of alternative implementations (e.g., Silverlight), Microsoft introduced the concept of Portable Class Libraries (PCL), allowing a consuming library to run on more than one platform. With the further proliferation of .NET platforms, the PCL approach failed to scale (PCLs are defined intersections of API surface between two or more platforms). As the next evolutionary step beyond PCL, the .NET Standard Library was created retroactively, based on the System.Runtime.dll-based APIs found in UWP and Silverlight. New .NET platforms are encouraged to implement a version of the standard library, allowing them to re-use existing third-party libraries without requiring new versions of those libraries. The .NET Standard Library allows an independent evolution of the library and app model layers within the .NET architecture. NuGet is the package manager for all .NET platforms.
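To give a concrete flavour of the class library, the hedged sketch below uses only framework-supplied types from the System.IO and System.Xml.Linq namespaces to write a small XML document to a temporary file and read it back; the file name and element names are invented for the example:

using System;
using System.IO;
using System.Xml.Linq;

// Illustrative use of FCL types for file handling and XML manipulation.
class FclDemo
{
    static void Main()
    {
        // Hypothetical file path used only for this sketch.
        string path = Path.Combine(Path.GetTempPath(), "inventory.xml");

        // System.Xml.Linq builds and saves the document (file writing).
        var doc = new XDocument(
            new XElement("inventory",
                new XElement("item",
                    new XAttribute("name", "register"),
                    new XAttribute("count", 3))));
        doc.Save(path);

        // System.IO reads the file back as text (file reading).
        Console.WriteLine(File.ReadAllText(path));

        // Query the document again and extract a typed value.
        int count = (int)XDocument.Load(path).Root.Element("item").Attribute("count");
        Console.WriteLine("count = " + count);
    }
}

Everything in this sketch ships with the framework itself (in assemblies such as mscorlib.dll, System.dll and System.Xml.Linq.dll); libraries outside the FCL are normally brought into a project through NuGet.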
It is used to retrieve third-party libraries into a .NET project with a global library feed at NuGet.org. Private feeds can be maintained separately, e.g., by a build server or a file system directory. C++/CLI Microsoft introduced C++/CLI in Visual Studio 2005, which is a language and means of compiling Visual C++ programs to run within the .NET Framework. Some parts of the C++ program still run within an unmanaged Visual C++ Runtime, while specially modified parts are translated into CIL code and run with the .NET Framework's CLR. Assemblies compiled using the C++/CLI compiler are termed mixed-mode assemblies, since they contain native and managed code in the same DLL. Such assemblies are more complex to reverse engineer, since .NET decompilers such as .NET Reflector reveal only the managed code. Design principle Interoperability Because computer systems commonly require interaction between newer and older applications, .NET Framework provides means to access functions implemented in newer and older programs that execute outside .NET environment. Access to Component Object Model (COM) components is provided in System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework. Access to other functions is via Platform Invocation Services (P/Invoke). Access to .NET functions from native applications is via reverse P/Invoke function. Language independence .NET Framework introduces a Common Type System (CTS) that defines all possible data types and programming constructs supported by CLR and how they may or may not interact conforming to CLI specification. Because of this feature, .NET Framework supports the exchange of types and object instances between libraries and applications written using any conforming .NET language. Type safety CTS and the CLR used in .NET Framework also enforce type safety. This prevents ill-defined casts, wrong method invocations, and memory size issues when accessing an object. This also makes most CLI languages statically typed (with or without type inference). However, starting with .NET Framework 4.0, the Dynamic Language Runtime extended the CLR, allowing dynamically typed languages to be implemented atop the CLI. Portability While Microsoft has never implemented the full framework on any system except Microsoft Windows, it has engineered the framework to be cross-platform, and implementations are available for other operating systems (see Silverlight and § Alternative implementations). Microsoft submitted the specifications for CLI (which includes the core class libraries, CTS, and CIL), C#, and C++/CLI to both Ecma International (ECMA) and International Organization for Standardization (ISO), making them available as official standards. This makes it possible for third parties to create compatible implementations of the framework and its languages on other platforms. Security .NET Framework has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. CAS is based on evidence that is associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the Internet). CAS uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. 
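A hedged sketch of such a demand, written against the Code Access Security types in the System.Security.Permissions namespace (the file path is invented for the example, and CAS is only meaningful on .NET Framework; later .NET implementations do not enforce it), might look roughly like this:

using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

// Illustrative Code Access Security demand on .NET Framework.
class CasDemo
{
    static string ReadSettings()
    {
        // Demand read access to a specific (hypothetical) file before touching it.
        var permission = new FileIOPermission(
            FileIOPermissionAccess.Read, @"C:\example\settings.txt");
        permission.Demand();

        return File.ReadAllText(@"C:\example\settings.txt");
    }

    static void Main()
    {
        try
        {
            Console.WriteLine(ReadSettings());
        }
        catch (SecurityException e)
        {
            // Thrown if some caller in the chain lacks the demanded permission.
            Console.WriteLine("Caller lacks FileIOPermission: " + e.Message);
        }
    }
}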
The demand causes the CLR to perform a call stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission, a security exception is thrown. Managed CIL bytecode is easier to reverse-engineer than native code, unless it is obfuscated. Decompiler programs enable developers with no reverse-engineering skills to view the source code behind unobfuscated .NET assemblies. In contrast, apps compiled to native machine code are much harder to reverse-engineer, and source code is almost never produced successfully, mainly because of compiler optimizations and lack of reflection. This creates concerns in the business community over the possible loss of trade secrets and the bypassing of license control mechanisms. To mitigate this, Microsoft has included Dotfuscator Community Edition with Visual Studio .NET since 2002. Third-party obfuscation tools are also available from vendors such as VMware, V.i. Labs, Turbo, and Red Gate Software. Method-level encryption tools for .NET code are available from vendors such as SafeNet. Memory management The CLR frees the developer from the burden of managing memory (allocating and freeing it when done); it handles memory management itself by detecting when memory can be safely freed. Instantiations of .NET types (objects) are allocated from the managed heap, a pool of memory managed by the CLR. As long as a reference to an object exists, whether direct or via a graph of objects, the object is considered to be in use. When no reference to an object exists and it cannot be reached or used, it becomes garbage, eligible for collection. .NET Framework includes a garbage collector (GC) that runs periodically, on a separate thread from the application's thread, and enumerates all the unusable objects and reclaims the memory allocated to them. It is a non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs only when a set amount of memory has been used or there is enough memory pressure on the system. Since it is not guaranteed when the conditions to reclaim memory are reached, GC runs are non-deterministic. Each .NET application has a set of roots, which are pointers to objects on the managed heap (managed objects). These include references to static objects, objects defined as local variables or method parameters currently in scope, and objects referred to by CPU registers. When the GC runs, it pauses the application and then, for each object referred to by a root, recursively enumerates all the objects reachable from the root objects and marks them as reachable. It uses CLI metadata and reflection to discover the objects encapsulated by an object, and then recursively walks them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection. All objects not marked as reachable are garbage. This is the mark phase. Since the memory held by garbage is of no consequence, it is considered free space. However, this leaves chunks of free space between objects which were initially contiguous. The objects are then compacted together to make free space on the managed heap contiguous again. Any reference to an object invalidated by moving the object is updated by the GC to reflect the new location. The application is resumed after garbage collection ends. The latest version of .NET Framework uses concurrent garbage collection along with user code, making pauses unnoticeable, because it is done in the background.
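The behaviour described above, together with the generational scheme explained in the next paragraph, can be observed through the System.GC class. The sketch below is illustrative only; actual generation numbers and memory figures depend on the runtime and are not guaranteed:

using System;

// Illustrative look at garbage collection and object generations.
class GcDemo
{
    static void Main()
    {
        // Reachable via a local variable, i.e. through a GC root.
        byte[] survivor = new byte[1024];
        Console.WriteLine("Initial generation: " + GC.GetGeneration(survivor)); // typically 0

        // Allocate short-lived garbage; nothing references these arrays afterwards.
        for (int i = 0; i < 10000; i++)
        {
            byte[] temp = new byte[1024];
            temp[0] = (byte)i;
        }

        GC.Collect();                    // force a collection (normally left to the CLR)
        GC.WaitForPendingFinalizers();

        // The still-referenced object survives the collection and is usually promoted.
        Console.WriteLine("After collection: generation " + GC.GetGeneration(survivor));
        Console.WriteLine("Managed heap in use: " + GC.GetTotalMemory(false) + " bytes");
    }
}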
The garbage collector used by .NET Framework is also generational. Objects are assigned a generation. Newly created objects are tagged Generation 0. Objects that survive one garbage collection are tagged Generation 1. Generation 1 objects that survive another collection are Generation 2. The framework uses up to Generation 2 objects. Higher generation objects are garbage collected less often than lower generation objects. This raises the efficiency of garbage collection, as older objects tend to have longer lifetimes than newer objects. By ignoring older objects in most collection runs, fewer checks and compaction operations are needed in total. Performance When an application is first launched, the .NET Framework compiles the CIL code into executable code using its just-in-time compiler, and caches the executable program into the .NET Native Image Cache. Due to caching, the application launches faster for subsequent launches, although the first launch is usually slower. To speed up the first launch, developers may use the Native Image Generator utility to manually ahead-of-time compile and cache any .NET application. The garbage collector, which is integrated into the environment, can introduce unanticipated delays of execution over which the developer has little direct control. "In large applications, the number of objects that the garbage collector needs to work with can become very large, which means it can take a very long time to visit and rearrange all of them." .NET Framework provides support for calling Streaming SIMD Extensions (SSE) via managed code from April 2014 in Visual Studio 2013 Update 2. However, Mono has provided support for SIMD Extensions as of version 2.2 within the namespace in 2009. Mono's lead developer Miguel de Icaza has expressed hope that this SIMD support will be adopted by CLR's ECMA standard. Streaming SIMD Extensions have been available in x86 CPUs since the introduction of the Pentium III. Some other architectures such as ARM and MIPS also have SIMD extensions. In case the CPU lacks support for those extensions, the instructions are simulated in software. Alternative implementations .NET Framework was the predominant implementation of .NET technologies, until the release of .NET. Other implementations for parts of the framework exist. Although the runtime engine is described by an ECMA-ISO specification, other implementations of it may be encumbered by patent issues; ISO standards may include the disclaimer, "Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO shall not be held responsible for identifying any or all such patent rights." It is harder to develop alternatives to FCL, which is not described by an open standard and may be subject to copyright restrictions. Also, parts of FCL have Windows-specific functions and behavior, so implementation on non-Windows platforms can be problematic. Some alternative implementations of parts of the framework are listed here. .NET Micro Framework is a .NET platform for extremely resource-constrained devices. It includes a small version of CLR and supports development in C# (though some developers were able to use VB.NET, albeit with an amount of hacking, and with limited functionalities) and debugging (in an emulator or on hardware), both using Microsoft Visual Studio. 
It also features a subset of .NET Framework Class Library (about 70 classes with about 420 methods), a GUI framework loosely based on WPF, and additional libraries specific to embedded applications. Mono is an implementation of CLI and FCL, and provides added functions. It is licensed as free software under the MIT License. It includes support for ASP.NET, ADO.NET, and Windows Forms libraries for a wide range of architectures and operating systems. It also includes C# and VB.NET compilers. Portable.NET (part of DotGNU) provides an implementation of CLI, parts of FCL, and a C# compiler. It supports a variety of CPUs and operating systems. The project was discontinued, with the last stable release in 2009. Microsoft Shared Source Common Language Infrastructure is a non-free implementation of CLR. However, the last version runs on Windows XP SP2 only, and has not been updated since 2006. Thus, it does not contain all features of version 2.0 of .NET Framework. CrossNet is an implementation of CLI and parts of FCL. It is free software using an open source MIT License. Licensing Microsoft managed code frameworks and their components are licensed as follows: See also .NET (formerly .NET Core) List of CLI languages Standard Libraries (CLI), the .NET standard libraries Base Class Library (BCL) Notes References External links Overview of .NET Framework (MSDN) .NET Github repository .NET Framework implementations 2002 software Cross-platform software Microsoft application programming interfaces Microsoft development tools Microsoft free software
83220
https://en.wikipedia.org/wiki/NCR%20Corporation
NCR Corporation
NCR Corporation, previously known as National Cash Register, is an American software, managed and professional services, consulting and technology company. It manufactures self-service kiosks, point-of-sale terminals, automated teller machines, check processing systems, and barcode scanners. NCR was founded in Dayton, Ohio, in 1884 and acquired by AT&T in 1991. A restructuring of AT&T in 1996 led to NCR's re-establishment on January 1, 1997, as a separate company and involved the spin-off of Lucent Technologies from AT&T. In June 2009 the company sold most of the Dayton properties and moved its headquarters to the Atlanta metropolitan area in unincorporated Gwinnett County, Georgia, near Duluth. In early January 2018, the new NCR Global Headquarters opened in Midtown Atlanta near Technology Square (adjacent to the Georgia Institute of Technology). History Early years The company began as the National Manufacturing Company of Dayton, Ohio, and was established to manufacture and sell the first mechanical cash register invented in 1879 by James Ritty. In 1884, the company and patents were bought by John Henry Patterson and his brother Frank Jefferson Patterson, and the firm was renamed the National Cash Register Company. Patterson formed NCR into one of the first modern American companies by introducing new, aggressive sales methods and business techniques. He established the first sales training school in 1893 and introduced a comprehensive social welfare program for his factory workers. Other significant figures in the early history of the company were Thomas J. Watson, Sr., Charles F. Kettering and Edward A. Deeds. Watson—later fired by Patterson in 1914—eventually worked his way up to general sales manager. At an uninspiring sales meeting, Watson interrupted, saying "The trouble with every one of us is that we don't think enough. We don't get paid for working with our feet — we get paid for working with our heads". Watson then wrote THINK on the easel. Signs with this motto were later erected in NCR factory buildings, sales offices and club rooms during the mid-1890s. "THINK" later became a widely known symbol of IBM, which was created by Watson after he joined the Computing-Tabulating-Recording Company (CTR). Kettering designed the first cash register powered by an electric motor in 1906. Within a few years he developed the Class 1000 register which was in production for 40 years, and the O.K. Telephone Credit Authorization system for verifying credit in department stores. Deeds and Kettering went on to found Dayton Engineering Laboratories Company which later became the Delco Electronics Division of General Motors. In 1913, the company's market share was dominant and it was successfully prosecuted under the Sherman Antitrust Act of 1890. The ruling was appealed and executives avoided at least some of the court's strictures. American Selling Force When John H. Patterson and his brother took over the company, cash registers were expensive (US$50) and only about a dozen of "Ritty's Incorruptible Cashier" machines were in use. There was little demand for the expensive device, but Patterson believed the product would sell once shopkeepers understood it would drastically decrease theft by salesclerks. He created a sales team known as the "American Selling Force" which worked on commissions and followed a standard sales script, the "N.C.R. Primer." This was the first known sales training manual in existence. The philosophy was to sell a business function rather than just a piece of machinery. 
Sales demonstrations were set up in hotels (away from the distractions of the buyer's business) depicting a store interior complete with real merchandise and real cash. The sale prospect was described as the "P.P." or "Probable Purchaser." Once initial objections were swept aside and the P.P. admitted to internal theft losses, the product was demonstrated along with large business charts and diagrams. The deal was sealed with a 25 cent cigar. Patterson also invented the formal sales training academy, a summer event first set up in canvas tents and called “Sugar Camp.” The first known form of direct mail advertising also came courtesy of Patterson, who sent mail pieces to a predetermined list of addresses about his products. Patterson's “Get a Receipt” campaign was one of the world's first advertising campaigns. Welfare work NCR undertook extensive welfare work and was referred to as "America's model factory." Some historians have referred to company owner John Patterson as the "father of industrial welfare." The company had its own welfare department and is considered a pioneer in America for this work. Some of the company's welfare initiatives include safety devices, drinking fountains, baths, lockers, chairs and back support for machine operators, indoor bathrooms and a ventilation system to provide clean air. There were special provisions for women employees including restrooms, shorter work hours, high-back chairs, a women's dining room, and lessons in domestic science. In 1893, NCR constructed the first "daylight factory" buildings with floor-to-ceiling glass windows that let in light and could be opened to let in fresh air as well. Expansion NCR expanded quickly and became multi-national in 1888. Between 1893 and 1906 it acquired a number of smaller cash register companies. By 1911 it had sold one million machines and grown to almost 6,000 employees. Combined with rigorous legal attacks, Patterson's methods enabled the company to fight off bankruptcy, buy-out over 80 of its early competitors, and achieve control of 95% of the U.S. market. In 1912 the company was found guilty of violating the Sherman Antitrust Act. Patterson, Deeds, Watson and 25 other NCR executives and managers were convicted of illegal anti-competitive sales practices and were sentenced to one year of imprisonment. Their convictions were unpopular with the public due to the efforts of Patterson and Watson to help those affected by the Dayton, Ohio, floods of 1913, but efforts to have them pardoned by President Woodrow Wilson were unsuccessful. However, their convictions were overturned on appeal in 1915 on the grounds that important defense evidence should have been admitted. Two million units were sold by 1922, the year John Patterson died. In 1925, NCR went public with an issue of $55 million in stock, at that time the largest public offering in United States history. During the first World War, NCR manufactured fuses and aircraft instrumentation, and during World War II built aero-engines, bomb sights and code-breaking machines, including the American bombe designed by Joseph Desch. US Navy Bombe, code breaking machine The US Navy Bombe was built by NCR for the United States Naval Computing Machine Laboratory to decrypt the Enigma machine that encrypted German military messages. The NCR made American bombes (decryption machines) were faster, and soon more available, than the British bombes at Bletchley Park and its outstations. 
The American bombe was essentially the same as the English bombe, though it functioned better because it did not have to be built, as Keen had been forced to do owing to production difficulties, on the framework of a three-wheel machine. By late autumn 1943 new American machines were coming into action at the rate of about two a week, the ultimate total being in the region of 125. Post-war Building on its wartime experience with secret communication systems, high-speed counters and cryptanalytic equipment, NCR became a major post-war force in developing new computing and communications technology. In 1953, chemists Barrett K. Green and Lowell Schleicher of NCR in Dayton submitted a patent, "Pressure responsive record materials," for a carbonless copy paper. This became US Patent 2,730,457 and was commercialized as "NCR Paper." In February 1953, the company acquired the Computer Research Corporation (CRC), after which it created a specialized electronics division. In 1956, NCR introduced its first electronic device, the Class 29 Post-Tronic, a bank machine using magnetic stripe technology. With the General Electric Company (now known as GE), the company manufactured its first transistor-based computer in 1957, the NCR 304. Also in the 1950s, NCR introduced MICR (magnetic ink character recognition) and the NCR 3100 accounting machines. In 1962, NCR introduced the NCR-315 Electronic Data Processing System, which included the CRAM storage device, the first automated mass storage alternative to magnetic tape libraries accessed manually by computer operators. The NCR 390 and 500 computers were also offered to customers who did not need the full power of the 315. The NCR 390 accepted four types of input: magnetic ledger cards, punched cards, punched tape, and keyboard entry, with a tape read speed of 400 characters a second. The company's first all-integrated-circuit computer was the Century 100 of 1968. The Century 200 was added in 1970. The line was extended through the Century 300. The Century series was followed by the Criterion series in 1976, NCR's first virtual machine system. During this period, NCR also produced the 605 minicomputer for in-house use. It was the compute engine for the 399 and 499 accounting machines, several generations of in-store and in-bank controllers, and the 82xx/90xx IMOS COBOL systems. The 605 also powered peripheral controllers, including the 658 disk subsystem and the 721 communications processor. In 1974, scanners and computers developed by NCR marked the first occasion on which an item bearing a Universal Product Code (UPC) was scanned at the checkout of a supermarket, Troy's Marsh Supermarket in Troy, Ohio, a few miles away from NCR's Dayton headquarters. It was treated as a ceremonial occasion and involved a little bit of ritual. The night before, a team of Marsh's supermarket staff had moved in to put bar codes on hundreds of items in the store while NCR installed their scanners and computers. In 1982, NCR's Peripheral Products Division in Wichita, Kansas, together with peripheral manufacturer Shugart Associates, helped propel the computer industry into a new era of intelligent, standardized peripheral communications with the development of the Small Computer System Interface (SCSI). The SCSI standard enabled such diverse devices as disks, tapes, printers, and scanners to share a common interface to one or more computer systems in a way that had never before been possible, and it became a model for subsequent interfaces.
NCR developed the world's first SCSI interface chip based on the SCSI interface standard collaboratively developed. By 1986, the number of American mainframe makers had dropped from 8 (IBM and the "seven dwarfs") to 6 (IBM and the "BUNCH") and then to 4: IBM, Unisys, NCR, and Control Data Corporation. The company adopted the name NCR Corporation in 1974. Small computers In 1982, NCR became involved in open systems architecture. Its first such system was the UNIX-powered TOWER 16/32, the success of which (approximately 100,000 were sold) established NCR as a pioneer in bringing industry standards and open systems architecture to the computer market. These 5000-series systems were based on Motorola 68k CPUs and supported NCR's proprietary transaction processing system TMX, which was mainly used by financial institutions. This product line also saw the first time NCR had offered its products through other than its own direct sales channels since the early 1900s. Formally added to its company structure in March 1981, NCR's OEM System's Division spearheaded the design, sales revenue and market awareness and acceptance of NCR's Tower family. Part of the cause of this success was the decision by NCR senior management to hire reseller industry veterans for key positions within the fledgling operation and have that unit work with, but not answerable to, NCR's traditional management structure. The industry shift from proprietary minicomputers brought personnel with minicomputer and reseller backgrounds such as division heads Roger Nielsen (ex-Data General), Robert Hahn (ex-Data General), and Dan Kiegler (ex-Datapoint marketing), marketing manager and later Director of Field Sales, Dave Lang (ex-DEC reseller marketing director and salesperson) and other critical contributors at corporate levels; who then hired a complementary field sales organization primarily made up of proven people from DEC, Wang and other faltering minicomputer firms. In the 1980s, NCR sold various PC compatible AT-class computers, like the small NCR-3390 (called an "intelligent terminal"). They proposed a customized version of MS-DOS named NCR-DOS, which for example offered support for switching the CPU between 6, 8 or 10 MHz speeds. The computers featured an improved CGA adapter, the NGA, which had a 640×400 text mode more suitable for business uses than the original 640×200 mode, with characters drawn using single-pixel-wide lines, giving an appearance similar to that of classic IBM 3270 terminals. The additional four-color 640×400 graphical mode was identical to CGA's 320×200 mode from a programming point of view. NCR also manufactured two proprietary series of mini-to-midrange computers: I-Series: 9010 (IDPS Operating System), 9020 and 9100 (IMOS Operating System), 9040 and 9050 (IRX Operating System), 9200 / 9300 / 9300IP / 9400 / 9400IP / 9500 / System 1000 models 35 / 55 / 65 / 75 (ITX Operating System). These were "I" (Interactive) computers allowing TTY terminals to be connected. Later models supported all industry-standard communication protocols. V-Series: 8500 (VRX Operating System) and 9800 (VRX/E Operating System). These were "V" series, comparable to mainframes, supporting "Page mode" terminals. The hardware did have similarities with the I-Series while the operating system and user interface was totally different. In 1990, NCR introduced the System 3000, a seven-level family of computers based on Intel's 386 and 486 CPUs. 
The majority of the System 3000 range utilised IBM's Micro Channel architecture rather than the more prevalent ISA architecture, and utilised SCSI peripherals as well as the more popular parallel and serial port interfaces, resulting in a premium product with premium pricing. The 3600, through NCR subsidiary Applied Digital Data Systems supported both the Pick Operating System and Prime Information. The 1970s saw the widespread installations of the Model 770 in National Westminster and Barclays banks throughout the UK, but it was not until the Model 5070, developed at its Dundee plant in Scotland and introduced in 1983 that the company began to make more serious inroads into the ATM market. Subsequent models included the 5084, and 58xx (Personas) series. In early 2008, the company launched its new generation of ATMs—the 662x/663x SelfServ series. NCR currently commands over a third of the entire ATM market, with an estimated $18 trillion being withdrawn from NCR ATMs every year. In addition, NCR's expertise in this field led the company to contract with the U.S. Military to support the Eagle Cash program with customized ATMs. NCR 5xxx series The NCR 5xxx-series is the range of (ATMs) produced by NCR from the early 1980s. Most models were designed and initially manufactured at its Dundee factory in Scotland, but later produced at several other locations around the world. There have been several distinct generations: 50xx-series; The initial models introduced in 1983 were the 5070 (interior vestibule) and 5080 (Through The Wall or TTW) introduced a number of features which have become standard among ATMs. Most notably, the individual functions of the ATM are divided among discrete modules which can be easily removed and replaced for repair or replenishment. The 5080 featured the standard anti-vandal smoked perspex screen which covered the keypad and screen until the cardholder inserted their card. The enhanced 5084 TTW model appeared in 1987, and had an improved anti-vandal fascia and was the first ATM to dispense with the need for the retracting perspex screen. The 5085 offered the first crude deposit function; with the machine supplying the deposit envelopes which were subsequently stored in the machine's safe for subsequent back office processing. 56xx-series; produced from 1991 to 1997. Enhanced functions such as color displays and improved security and usability functions became available. The introduction of Media Entry Indicators (MEI) which highlight the card entry slot to the customer was also a part of this series. Some 56xx machines produced between 1994–1996 were badged as "AT&T" rather than "NCR", mirroring the company's brief ownership under the telecoms giant in the mid-1990s. 56xx models have included the 5670 (interior lobby cash dispense only), 5675 (interior lobby multifunction—dispense & deposit), 5684 (exterior TTW dispense only), 5688 (exterior TTW drive-up multifunction) and 5685 (exterior TTW multifunction). 58xx-series marketed as Personas from 1998 to the present. These models were characterised by the gradual move towards greater ATM functionality including intelligent, envelopeless deposit by means of automated cheque recognition modules, coin dispense, and electronic cash recognition functions which allows bank customers to deposit cash and cheques with instant processing of the transaction. The 58xx series has also been characterised by the gradual introduction of LCD displays instead of the traditional CRT monitor. 
Models have included the 5870 (compact interior lobby dispense only), 5873 (interior lobby with cash accept & deposit only), 5874 (Exterior TTW cash dispense), 5875 (Multifunction TTW). The latest TTW versions of the Personas line, introduced in 2000 and marketed as M-Series added functions such as cash recycling, coin dispense, barcode reading, a larger 12" LCD display with touchscreen option, and for the first time, a common wall footprint for both the Multifunction (5886) or single function (5887). NCR 66XX series NCR's 6th generation of ATMs have been noted for the further move towards intelligent deposit and the expansion of secondary functions such as barcode reading. 667x-series marketed under the Personas M-Series brand were introduced in 2005 to the present. These models consist of the 6676 (interior lobby multifunction) and 6674 (through-the-wall multifunction). The outlook design is very different from the Personas model; on the front-access 6676s the front cover is opened upwards which claim to be saving the services area. NCR Self-Serv 20 and 30 series NCR's latest ATM services, introduced in 2008. This series is a complete redesign of both outlook and technological contents. It is also a cost down product. Self-Serv 20 series are single-function (e.g. cash-out) ATMs, while Self-Serv 30 series are full-function (cash-out and intelligent deposit) machines. AT&T Teradata Teradata partnered with NCR in 1990 and was purchased by NCR in 1991. Mark Hurd took over the company's Teradata division in 1999 and is credited with expanding NCR's Teradata business. Hurd streamlined operations and invested in research. The Teradata division at NCR became profitable in 2002. Acquisition NCR was acquired September 19, 1991 by AT&T Corporation for $7.4 billion and was joined with Teradata Corporation on February 28, 1992. As an AT&T subsidiary, its 1992 year-end headcount was 53,800 employees and contractors. By 1993, the subsidiary produced a year-end $1.287 billion net loss on $7.265 billion in revenue. The net losses continued in 1994 and 1995, losses that required repeated subsidies from the parent company and resulted in a 1995 year-end headcount of 41,100. During these three years, AT&T was the former NCR's largest customer, accounting for over $1.5 billion in revenue. On February 15, 1995, the company sold its microelectronics division and storage systems division to Hyundai which named it Symbios Logic. At the time it was the largest purchase of an American company by a Korean company. For a while, starting in 1994, the subsidiary was renamed AT&T Global Information Solutions, but in 1995, AT&T decided to spin off the company, and in 1996, changed its name back to NCR in preparation for the spin-off. The company outlined its reasons for the spin-off in an Information Statement sent to its stockholders, which cited, in addition to "changes in customer needs" and "need for focused management time and attention", the following: ...[A]dvantages of vertical integration [which had motivated ATT's earlier acquisition of NCR] are outweighed by its costs and disadvantages....[T]o varying degrees, many of the actual and potential customers of Lucent and NCR are or will be competitors of AT&T's communications services businesses. NCR believes that its efforts to target the communications industry have been hindered by the reluctance of AT&T's communications services competitors to make purchases from an AT&T subsidiary. NCR re-emerged as a stand-alone company on January 1, 1997. 
Independence One of NCR's first significant acquisitions after becoming independent from AT&T came in July 1997, when it purchased Compris Technologies, a privately held company in Kennesaw, Georgia that produced software for restaurant chains. In November 1997, NCR purchased Dataworks Inc., a 60-person privately held company in San Antonio, Texas. The Montgomery County Historical Society and NCR Corporation entered into a partnership in 1998 committed to preserving the historic and voluminous NCR Archive. In 1999, NCR moved an estimated three million items from NCR's Building 28 into the Historical Society's Research Center. In 1998, NCR sold its computer hardware manufacturing assets to Solectron and ceased to produce general-purpose computer systems, focusing instead on the retail and financial industries. In 2000, NCR acquired customer relationship management provider Ceres Integrated Solutions and services company 4Front Technologies. Subsequent acquisitions included self-service companies Kinetics, InfoAmerica and Galvanon, and software company DecisionPoint. In April 2003, NCR purchased Copient Technologies, an Indiana-based retail marketing software company. CEO Lars Nyberg announced his resignation from the company in February 2003 in order to address family matters. NCR promoted Mark Hurd to replace Nyberg as CEO in March 2003. Early in his new role, Hurd made changes to cut costs, including layoffs and converting an executive parking lot into an ATM training center. Within his first year as CEO, the company's stock doubled and NCR became a market leader in ultra-high-end data-warehousing software. Bill Nuti's management In 2006, NCR acquired software company IDVelocity and the ATM manufacturing division of Tidel, a cash security equipment manufacturer specializing in retail markets. On January 8, 2007, NCR announced its intention to separate into two independent companies by spinning off Teradata to shareholders. Bill Nuti would continue his role as president and CEO of NCR, while Teradata Senior VP Mike Koehler would assume leadership of Teradata. On October 1, 2007, NCR Corporation and Teradata jointly announced that the Teradata business unit spin-off was complete, with Michael Koehler as the first CEO of Teradata. On January 11, 2007, NCR announced plans to restructure its entire ATM manufacturing operations, with 650 jobs at its Dundee plant being cut. A further 450 jobs were cut in Waterloo, Ontario, Canada. In 2009, the Dundee manufacturing facility was closed, along with plants in São Paulo and Bucharest, with the company citing global economic conditions. NCR extended its self-service portfolio into the digital media market with the January 2007 announcement of NCR Xpress Entertainment, a multichannel entertainment kiosk. NCR's acquisition of Touch Automation LLC was announced on December 31, 2007. On October 15, 2008, NCR announced a global reseller partnership with Experticity, a Seattle-based software company. In 2009, NCR relocated its corporate headquarters from Dayton, Ohio to near Duluth, Georgia; Dayton had served as NCR's home for 125 years. In 2009, NCR became the second-largest DVD kiosk operator in North America with the acquisitions of The New Release and DVD Play. In 2010, NCR completed the acquisition of digital signage company Netkey. In August 2011, NCR purchased Radiant Systems, a hospitality and retail systems company, for US$1.2 billion. Radiant's hospitality division became a new Hospitality line of business within NCR. 
Radiant's petroleum and convenience retail business became part of its retail line of business. Several Radiant executives remained on board, including Scott Kingsfield, who was a general manager of NCR's Retail line of business and left NCR in 2014, and Andy Heyman, who became general manager of NCR's Financial Services line of business. In August 2012, the company was hit with charges of circumventing U.S. economic sanctions against Syria, which greatly affected its stock price. In February 2013, NCR completed its acquisition of Retalix (NASDAQ: RTLX), a provider of retail software and services, for approximately $650 million in cash. In January 2014, NCR completed its acquisition of Digital Insight Corporation, a provider of online and mobile banking to mid-market financial institutions, from private equity firm Thoma Bravo, LLC for $1.65 billion in cash. In September 2016, Mark Benjamin was named president and chief operating officer of NCR. Benjamin, a 24-year veteran of human resources management, reported directly to Bill Nuti. Relocation and recent history In January 2018, NCR relocated its corporate headquarters from near Duluth, Georgia to a new office in Midtown Atlanta. NCR's mailing address is 864 Spring St NW, Atlanta, GA 30308. In April 2018, Mike Hayford was named CEO. He leads the company's strategic shift from hardware provider to software- and services-led enterprise technology provider. In 2019, NCR announced plans to start building a campus in Belgrade, Serbia. In January 2021, NCR reached an agreement to acquire ATM operator Cardtronics in a deal valued at $2.5 billion. In October 2021, NCR opened the largest IT center and campus in Europe, in New Belgrade, Serbia. Products and services NCR's R&D activity is split between its three major centers in Atlanta (retail); Dundee, Scotland (financial industry); and Waterloo, Ontario. It also has R&D centers in Beijing; Cebu, Philippines; Belgrade, Serbia; Banja Luka, Bosnia and Herzegovina; and Puducherry, Chengalpattu and Hyderabad, India. NCR also has manufacturing facilities in Beijing, Budapest, and in Puducherry and Chengalpattu in India, which serve as a regional manufacturing and export hub. Hardware Item Processing platforms (mainly checks) (7780, iTRAN 8000, TS) PCs (System 3000) Point of Sale (POS) for retail and food service POS Displays POS Printers POS Touch Screens POS Terminals NCR Silver, a complete point-of-sale system that runs on iPad, iPhone or iPod touch devices POS Self Checkout (NCR SelfServ Checkout, formerly NCR FastLane) POS Scanners Self-service hardware, ATMs and kiosks (EasyPoint, Personas, SelfServ) Servers (S1600, S2600, System 5000, Tower) Petroleum POS Optic 12, Optic 5 Services E-business Education IT infrastructure services Managed services Payment Retail Self-service Obsolete Class 1000 register Class 2000 bank posting machine (c. 1922–1973) NCR 2170 Retail System point of sale terminal and software NCR Voyager, an i386 SMP computer platform that preceded Intel's SMP specification Electronic shelf labels (RealPrice – discontinued 2008) EasyPoint Mini, a touchscreen device originated by Copient Technologies Senior management CEO: Michael Hayford (April 2018 – present) CEO: Bill Nuti (August 2005 – 2018) CEO: Mark Hurd (2003–2005) CEO: Lars Nyberg (1996–2003) CEO: Jerre Stead (1993–1995) company renamed AT&T GIS CEO: Charles E. Exley, Jr. (1983–1993) CEO: William S. Anderson (1973–1984) CEO: Robert S. Oelman (1962–1973) CEO: Stanley C. Allyn (1957–1962) CEO: Edward A. 
Deeds (1931–1957) CEO: Frederick Beck Patterson (1922–1931) CEO: John H. Patterson (1884–1922) Interim CEO: Jim Ringler (2005) Interim CEO: Bill O'Shea (1995) Interim CEO: Gil Williamson (1993) See also NCR Book Award References External links NCR IPS UK Report of Dundee Redundancies Information on early National registers Dayton's Code Breakers The History of Computing Project: NCR Timeline The Core Memory Project: NCR Computers of the 20th Century Decision Mate V American companies established in 1884 1884 establishments in Ohio Manufacturing companies based in Atlanta Electronics companies established in 1884 Companies listed on the New York Stock Exchange Computer companies of the United States Electronics companies of the United States Financial technology companies Former components of the Dow Jones Industrial Average Point of sale companies Superfund sites in Delaware History of Dayton, Ohio 1991 mergers and acquisitions Former AT&T subsidiaries
145540
https://en.wikipedia.org/wiki/Aldus%20Corporation
Aldus Corporation
Aldus Corporation was a software company that developed desktop publishing (DTP) software. It is known for developing PageMaker, an early product in the desktop publishing field. The company is named after the 15th-century Venetian printer Aldus Manutius, and was founded by Jeremy Jaech, Mark Sundstrom, Mike Templeman, Dave Walter, and chairman Paul Brainerd. Aldus Corporation was based in Seattle, Washington. History PageMaker was released in July 1985 and relied on Adobe's PostScript page description language. For output, it used the Apple LaserWriter, a PostScript laser printer. PageMaker for the PC was released in 1986, but by then the Mac was the de facto DTP platform, with Adobe Illustrator (released in 1987) and Adobe Photoshop (released in 1990) completing the suite of graphic design software. In 1988, Aldus went on to offer its Illustrator-like program FreeHand, licensed from Altsys (who also developed Fontographer). FreeHand and Illustrator competed with each other for years through multiple releases. This rivalry continued even after Adobe's acquisition of Aldus, because FreeHand was not included in the deal; Adobe eventually acquired FreeHand in 2005 with its acquisition of Macromedia. FreeHand MX was the last version offered by Adobe but is no longer sold or updated. In early 1990, Aldus bought Silicon Beach Software, acquiring a number of consumer titles for the Macintosh, including SuperPaint, Digital Darkroom, SuperCard, Super3D, and Personal Press (later renamed Adobe Home Publisher). Silicon Beach was located in San Diego, California, and became the Aldus Consumer Division. In 1993, Aldus bought After Hours Software and incorporated its products, TouchBase Pro and DateBook Pro, into the Aldus Consumer Division. The same year, it acquired Company of Science and Art (CoSA). During the 1990s, QuarkXPress steadily won ground from PageMaker, and it seemed increasingly odd that Adobe, which had created PostScript, a technology vital to the working of DTP, still did not offer its own page layout application. This was resolved in September 1994 when Adobe purchased Aldus for $446 million. After two more major releases, PageMaker was discontinued in 2001 and is no longer supported; existing PageMaker customers were urged to switch to InDesign, released in 1999. Aldus developed the TIFF and OPI industry standards. The three founders of Visio Corporation left Aldus in 1990 to create the product which later became known as Microsoft Office Visio. Company name "Paul Brainerd and his partners decided to name their company Aldus, after Aldus Pius Manutius (Teobaldo Mannucci) (1449–1515), a famous fifteenth-century Venetian pioneer in publishing, known for standardizing the rules of punctuation and also presenting several typefaces, including the first italic. Manutius went on to found the first modern publishing house, the Aldine Press." 
Products Print publishing PageMaker — A desktop publishing program Prepress ColorCentral — An OPI server PressWise — A digital imposition program PrintCentral — A print output spooler TrapWise — A digital trapping program Graphics FreeHand — A vector drawing program Gallery Effects Persuasion — A presentation program PhotoStyler — A bitmap image editor TextureMaker — A program for creating textures/patterns SuperPaint — Painting and vector drawing program Intellidraw — A powerful yet simple drawing program Aldus Interactive Publishing/CoSA After Effects — A digital motion graphics and compositing program Hitchcock — A professional non-linear video editor, with titling and A/V transitions Fetch — A multimedia database Aldus Consumer Division (formerly Silicon Beach Software and After Hours Software) Digital Darkroom photo enhancement software Personal Press consumer desktop publishing software DateBook Pro — Calendar management software IntelliDraw — A vector drawing program Super3D — 3D modeling software SuperCard multimedia authoring environment TouchBase Pro — Contact management software References External links Logo of Aldus Corporation. An article on typography briefly discussing the origin of the Aldus logo. Printed material related to Aldus products Software companies disestablished in 1994 Software companies established in 1984 Defunct companies based in Seattle Defunct software companies of the United States 1984 establishments in Washington (state) 1994 disestablishments in Washington (state) 1994 mergers and acquisitions
357094
https://en.wikipedia.org/wiki/Scheria
Scheria
Scheria or Scherie, also known as Phaeacia or the Land of the Faiakes (Φαίακες), was a region in Greek mythology, first mentioned in Homer's Odyssey as the home of the Phaeacians and the last destination of Odysseus in his 10-year journey before returning home to Ithaca. It is one of the earliest descriptions of a utopia, and it underlies many later imagined utopias, ranging from Prospero's island, the "brave new world" of Shakespeare's The Tempest, to the Land of Oz. From Ogygia to Scheria (Odysseus) Before leaving Ogygia, Odysseus builds a raft and sails eastwards, instructed by Calypso to navigate using the stars as a celestial reference point. On the eighteenth day the shadowy mountains of the land of the Phaeacians appear, looking like a shield in the misty deep. But Poseidon spots his raft and, seeking vengeance for his son Polyphemus, who was blinded by Odysseus, produces a storm that torments Odysseus. After three days of struggle with the waves, he is finally washed up on Scheria. Odysseus meets Nausicaa Meanwhile, the goddess Athena sneaks into the palace, disguised as a sea-captain's daughter, and instructs princess Nausicaa (the daughter of King Alcinous) in her sleep to go to the seashore and wash her clothes. The next morning, Nausicaa and her maids go to the seashore, and after washing the clothes, start to play a game on the beach, with laughs, giggles and shouts. Odysseus, who was exhausted from his adventure and sleeping nearby, is awakened by the shouts. He covers his nakedness with thick leaves and goes to ask for help from the group. Upon seeing the unkempt Odysseus in this state, the maids run away, but Nausicaa, encouraged by Athena, stands her ground and talks to him. To excuse the maids, she admits that the Phaeacians are "the farthermost of men, and no other mortals are conversant with them", so they run away, since they have never seen a stranger before. Nausicaa, being hospitable, provides clothes, food and drink to Odysseus, and then directs him to the palace of King Alcinous. The palace of King Alcinous Following Nausicaa's instructions, Odysseus seeks to enter the palace of King Alcinous and plead for mercy from the queen, Arete, so that he can make his way home. On his way to the palace, Odysseus meets Athena disguised as a local girl. In her disguised state, Athena advises him about how to enter the palace. Athena, knowing that the Phaeacians are hostile towards men from the outlands, cloaks Odysseus in a mist that hides him from the Phaeacians' gaze. Under Athena's protection, Odysseus passes through all of the protection systems of the palace and enters the chamber of King Alcinous. Odysseus throws his arms around the queen's legs and supplicates her. Naturally, Alcinous and his court are surprised to see a stranger walking into their secured palace. It is only after Echeneus, a Phaeacian elder, urges King Alcinous to welcome the stranger that they offer Odysseus hospitality. The front doors of the palace are flanked with two dogs made of silver and gold, constructed by Hephaestus. The walls of the palace are made of bronze that "shines like the sun", with gates made of gold. Within the walls, there is a magnificent garden with apple, pear, and pomegranate trees that grow year-round. The palace is even equipped with a lighting system consisting of golden statues of young men bearing torches. After Odysseus tells Alcinous and his court the story of his adventures after the Trojan War, the Phaeacians take him to Ithaca on one of their ships. 
The Phaeacian ships The Phaeacians possessed remarkable ships. They were quite different from the penteconters, the ships used during the Trojan War, and they were steered by thought. King Alcinous says that the Phaeacians carried Rhadamanthus to Euboea, "which is the furthest of any place", and came back on the same day. He also explains to Odysseus what sort of information the Phaeacian ships require in order to take him home to Ithaca. Homer describes the Phaeacian ships as being as fast as a falcon and gives a vivid description of a ship's departure. Geographical location of Scheria Many ancient and modern interpreters favour identification of Scheria with the island of Corfu, which is within 110 km (68 miles) of Ithaca. Thucydides, in his Peloponnesian War, identifies Scheria as Corfu or, by its ancient name, Corcyra. In I.25.4, he records the Corinthians' resentment of the Corcyraeans, who "could not repress a pride in the high naval position of an island whose nautical renown dated from the days of its old inhabitants, the Phaeacians." Locals on Corfu had long claimed this, based on the rock off the west coast of the island, which is supposedly the ship that carried Odysseus back to Ithaca and was turned to stone by Poseidon to punish the Phaeacians for helping his enemy. The Phaeacians did not participate in the Trojan War. The Greek name Φαίακες is derived from phaiós (φαιός “gray”). The Phaeacians in the Odyssey did not know Odysseus (although they knew of him, as evidenced by the tales of Demodocus), so they called him a "stranger". Odysseus, however, was king of the majority of the Ionian Islands, not only of Ithaca but also "of Cephallenia, Neritum, Crocylea, Aegilips, Same and Zacynthus", so if Scheria were Corfu, it would be surprising that the citizens of one of the Ionian Islands did not know Odysseus. Furthermore, when Odysseus reveals his identity, he says to the nobles: "[…] if I outlive this time of sorrow, I may be counted as your friend, though I live so far away from all of you", indicating that Scheria was far away from Ithaca. Many characteristics of the Phaeacians, including their seafaring and relaxed lifestyle, are suggestive of Minoan Crete. Aside from their seafaring prowess, the palace walls that shone like the Sun have been interpreted as being covered not with bronze but with orichalcum. The latter similarities make Scheria also suggestive of Plato's account of Atlantis. Helena Blavatsky proposed in her Secret Doctrine (1888) that it was Homer before Plato who first wrote of Atlantis. Since ancient times, some scholars who have examined Homer's work and geography have suggested that Scheria was located in the Atlantic Ocean. Among them were Strabo and Plutarch. Geographical account by Strabo Approximately eight centuries after Homer, the geographer Strabo criticized Polybius on the geography of the Odyssey. Strabo proposed that Scheria and Ogygia were located in the middle of the Atlantic Ocean. Notes External links Odyssey by Homer Homer's Odyssey resources on the Web Strabo: The Geography Atlantis, Poseidonis, Ogygia and Scheria (on page 8) Mythological islands Geography of the Odyssey Locations in Greek mythology Corcyraean mythology
1435171
https://en.wikipedia.org/wiki/X%20display%20manager
X display manager
In the X Window System, an X display manager is a graphical login manager which starts a login session on an X server from the same or another computer. A display manager presents the user with a login screen. A session starts when a user successfully enters a valid combination of username and password. When the display manager runs on the user's computer, it starts the X server before presenting the user with the login screen, optionally repeating the cycle when the user logs out. In this role, the display manager provides, for the X Window System, the functionality that getty and login provide on character-mode terminals. When the display manager runs on a remote computer, it acts like a telnet server, requesting a username and password and starting a remote session. X11 Release 3 introduced display managers in October 1988 with the aim of supporting the standalone X terminals that were just coming onto the market. Various display managers continue in routine use to provide a graphical login prompt on standalone computer workstations running X. X11R4 introduced the X Display Manager Control Protocol (XDMCP) in December 1989 to fix problems in the X11R3 implementation. History XDM (the X Window Display Manager) originated in X11R3. This first version, written by Keith Packard of the MIT X Consortium, had several limitations, the most notable of which was that it could not detect when users switched X terminals off and on. In X11R3, XDM only knew about an X terminal from its entry in the Xservers configuration file, but it only consulted this file when it started. Thus, every time a user switched a terminal off and on, the system administrator had to send a SIGHUP signal to XDM to instruct it to rescan Xservers. XDMCP arrived with the introduction of X11R4 (December 1989). With XDMCP, the X server must actively request a display manager connection from the host. An X server using XDMCP therefore no longer requires an entry in Xservers. Local and remote display management A display manager can run on the same computer where the user sits (starting one or more X servers, displaying the login screen at the beginning and, optionally, every time the user logs out) or on a remote one, working according to the XDMCP protocol. The XDMCP protocol mandates that the X server start autonomously and connect to the display manager. In the X Window System paradigm, the server runs on the computer providing the display and input devices. A server can connect, using the XDMCP protocol, to a display manager running on another computer, requesting it to start the session. In this case, the X server acts as a graphical telnet client while the display manager acts like a telnet server: users start programs from the computer running the display manager, while their input and output take place on the computer where the server (and the user) sits. An administrator can typically configure an XDMCP Chooser program running on the local computer or X terminal to connect to a specific host's X display manager or to display a list of suitable hosts that the user can choose from. Most implementations enable such a list to contain: a predefined set of hosts and their respective network addresses, and/or a set of hosts (on the local TCP/IP subnet) that the XDMCP Chooser determines by a network broadcast to the available display managers. When the user selects a host from the list, the XDMCP Chooser running on the local machine will send a message to the selected remote computer's display manager and instruct it to connect the X server on the local computer or terminal. 
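The distinction between servers started locally by XDM and remotely managed X terminals was expressed in the Xservers file mentioned above. The following is a minimal illustrative example only; the file's location (commonly given as /etc/X11/xdm/Xservers) and the exact X server command line differ between installations, so the entries below are assumptions for illustration rather than a reference configuration.

```
# X server started and managed by XDM on the local machine
:0 local /usr/bin/X :0 vt7

# X terminal on the network, managed by XDM but not started by it
# (the pre-XDMCP, X11R3-style configuration described above)
xterminal1:0 foreign
```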
X Display Manager Control Protocol The X Display Manager Control Protocol (XDMCP) uses UDP port 177. An X server requests that a display manager start a session by sending a Query packet. If the display manager allows access for that X server, it responds by sending a Willing packet back to the X server. (The X server can also send BroadcastQuery or IndirectQuery packets to start a session; this mechanism for requesting a session resembles using DHCP to request an IP address.) The display manager must authenticate itself to the server. To do this the X server sends a Request packet to the display manager, which returns an Accept packet. If the Accept packet contains the response the X server expects, the display manager is authenticated. Producing the correct response might require the display manager to have access to a secret key, for example. If authentication succeeds, the X server sends a Manage packet to inform the display manager. Then the display manager displays its login screen by connecting to the X server as a regular X client. During the session, the server can send KeepAlive packets to the display manager at intervals. If the display manager fails to respond with an Alive packet within a certain time, the X server presumes that the display manager has ceased running, and can terminate the connection. (A minimal code sketch of the initial Query/Willing exchange is given at the end of this section.) Security One problem with XDMCP is that, similarly to telnet, the authentication takes place unencrypted. If snooping is possible, this leaves the system vulnerable to attack. It is more secure to use an SSH tunnel for X traffic. Implementations The X Window System supplies XDM as its standard display manager. Programmers have developed other X display managers, both commercial and free, offering additional functionality over the basic display management: Active GDM, the GNOME implementation SDDM, recommended display manager for KDE Plasma 5 and LXQt; successor to KDM LightDM, a lightweight, modular, cross-desktop, fully themeable desktop display manager by Canonical Ltd. TWin, the TDE window manager xlogin display manager, a lightweight, secure and login-like console display manager for X, written in C Inactive KDM (part of KDE) allows the user to graphically select a window manager or desktop environment in the login screen Qingy, an ultralight and very configurable graphical login manager independent of the X Window System (uses DirectFB) XDM-OPTIONS for XDM. Easy full install, Xhost Phonebook, X Login, X Desktop Chooser, menu-reconfig, repair utils. LDM, the (remote) Display Manager of the Linux Terminal Server Project MDM, a graphical display manager developed for Linux Mint dtlogin (shipped with CDE); the display manager provided with SCO Open Desktop also checks for expired passwords and performs some administrative tasks WINGs Display Manager (using the WINGs widget-set used in Window Maker) entranced/entrance (employs the architecture used in Enlightenment v.17, on hiatus since 2005) LXDM, a lightweight cross-desktop and fully themeable display manager, part of LXDE SLiM, an independent login manager CDM, an ultralight Console Display Manager for Unix xlogin, X Window login with separate XDMCP server Enter, a lightweight graphical login manager Orthos, another lightweight solution with very configurable animated themes that use OpenGL only nodm, an auto-login display manager for systems like kiosks, appliances and mobile phones On some Unix distributions, the default display manager is selected in the file $PREFIX/etc/X11/default-display-manager. 
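The initial Query/Willing step of the XDMCP handshake described earlier in this section can be sketched in a few lines of Python. This is an illustration only: the packet layout (a 16-bit version, opcode and data-length header in network byte order, followed here by an empty list of authentication names) and the opcode values for Query (2) and Willing (5) follow the XDMCP specification as summarized above, and the host name in the final comment is hypothetical. A real client would continue with the Request/Accept/Manage exchange and the display-manager authentication that the protocol defines.

```python
import socket
import struct

XDMCP_PORT = 177      # UDP port registered for XDMCP
XDMCP_VERSION = 1
OP_QUERY = 2          # opcode values as given in the XDMCP specification
OP_WILLING = 5

def build_query():
    # Header: version, opcode, length of remaining data (all 16-bit, big-endian),
    # followed by an empty list of authentication names (a single zero byte).
    data = b"\x00"
    return struct.pack("!HHH", XDMCP_VERSION, OP_QUERY, len(data)) + data

def display_manager_is_willing(host, timeout=3.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_query(), (host, XDMCP_PORT))
    reply, _addr = sock.recvfrom(1024)
    _version, opcode, _length = struct.unpack("!HHH", reply[:6])
    return opcode == OP_WILLING   # True if the display manager offers a session

# Example with a hypothetical host name:
# print(display_manager_is_willing("xdm.example.org"))
```

Regarding the security concern noted above, remote X applications are more commonly run today by forwarding X traffic through SSH (for example with the OpenSSH client's -X option), which avoids an unencrypted exchange on the network.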
See also Login manager X Window System protocols and architecture Sources XDMCP specification, from the X.Org release documentation XDM manual page (XFree86.org) Linda Mui and Eric Pearce, X Window System Volume 8: X Window System Administrator's Guide for X11 Release 4 and Release 5, 3rd edition (O'Reilly and Associates, July 1993; softcover) References External links Linux XDMCP HOWTO Taming The X Display Manager The X Display Manager, from the FreeBSD Handbook Linux login with a Windows box and XDMCP: a guide to logging into Linux using Windows. X Window System
788649
https://en.wikipedia.org/wiki/Zabbix
Zabbix
Zabbix is an open-source software tool to monitor IT infrastructure such as networks, servers, virtual machines, and cloud services. Zabbix collects and displays basic metrics. Description Zabbix is designed primarily as an IT infrastructure monitoring tool. New major versions are generally released every six months, and LTS versions every year and a half. Released under the terms of the GNU General Public License version 2, Zabbix is free software that does not require a commercial license to use any of its features. Even though Zabbix is open-source software, it is developed as a closed-development product by Zabbix LLC, based in Riga, Latvia. Early in its history, Zabbix was described as simple to set up compared to other monitoring solutions; later, however, it was considered by some to require a significant amount of manual configuration. Development The first stable version, 1.0, was released in 2004. Since the first stable version was released as 1.0, Zabbix versioning has used minor version numbers to denote major releases. Each minor release implements many new features, while change-level releases mostly introduce bug fixes. The Zabbix version numbering scheme has changed over time. While the first two stable branches were 1.0 and 1.1, after 1.1 it was decided to use odd numbers for development versions and even numbers for stable versions. As a result, 1.3 followed 1.1 as a development update to be released as 1.4. See also Comparison of network monitoring systems List of systems management systems References Further reading Vidmar, Anže (March 12, 2007). ZABBIX: State-of-the-art network monitoring, Linux.com Ramm, Mark (March 15, 2005). The Watcher Knows, Linux Magazine Schroder, Carla (May 24, 2005). Monitor Your Net with Free, High-Performance ZABBIX, Enterprise Networking Planet External links Network management Internet Protocol based network software Free network management software Multi-agent systems 2001 software System monitors Management systems Systems management
507421
https://en.wikipedia.org/wiki/Film%20recorder
Film recorder
A film recorder is a graphical output device for transferring images to photographic film from a digital source. In a typical film recorder, an image is passed from a host computer to a mechanism that exposes film through a variety of methods, historically by direct photography of a high-resolution cathode ray tube (CRT) display. The exposed film can then be developed using conventional developing techniques, and displayed with a slide or motion picture projector. The use of film recorders predates the current use of digital projectors, which eliminate the time and cost involved in the intermediate step of transferring computer images to film stock, instead directly displaying the image signal from a computer. Motion picture film scanners are the opposite of film recorders, copying content from film stock to a computer system. Film recorders can be thought of as modern versions of Kinescopes. Design Operation All film recorders work in essentially the same manner. The image is fed from a host computer as a raster stream over a digital interface. A film recorder exposes film through various mechanisms: a flying spot (early recorders); photographing a high-resolution video monitor; an electron beam recorder (Sony HDVS); a CRT scanning dot (Celco); a focused beam of light from a light valve technology (LVT) recorder; a scanning laser beam (Arrilaser); or, recently, full-frame LCD array chips. For color image recording on a CRT film recorder, the red, green, and blue channels are sequentially displayed on a single gray-scale CRT, and exposed to the same piece of film as a multiple exposure through a filter of the appropriate color. This approach yields better resolution and color quality than is possible with a tri-phosphor color CRT. The three filters are usually mounted on a motor-driven wheel. The filter wheel, as well as the camera's shutter, aperture, and film motion mechanism, is usually controlled by the recorder's electronics and/or the driving software (a short illustrative sketch of this three-pass exposure is given below). CRT film recorders are further divided into analog and digital types. The analog film recorder uses the native video signal from the computer, while the digital type uses a separate display board in the computer to produce a digital signal for a display in the recorder. Digital CRT recorders provide a higher resolution at a higher cost compared to analog recorders due to the additional specialized hardware. Typical resolutions for digital recorders were quoted as 2K and 4K, referring to 2048×1366 and 4096×2732 pixels, respectively, while analog recorders provided a resolution of 640×428 pixels in comparison. Higher-quality LVT film recorders use a focused beam of light to write the image directly onto a film-loaded spinning drum, one pixel at a time. In one example, the light valve was a liquid-crystal shutter, the light beam was steered with a lens, and text was printed using a pre-cut optical mask. An LVT recorder can record pixels finer than the film grain; some machines can expose at 120 lines per millimeter ("120-res"). The LVT is essentially a drum scanner working in reverse. The exposed film is developed and printed by regular photographic chemical processing. Formats Film recorders are available for a variety of film types and formats. 35mm negative film and transparencies are popular because they can be processed by any photo shop. Single-image 4×5 and 8×10 film are often used for high-quality, large-format printing. 
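The three-pass color exposure described under Operation above can be sketched in Python. The sketch is illustrative only: set_filter, expose and advance_film are hypothetical stand-ins for a particular recorder's control interface (no standard API of this kind exists), and only the channel separation uses a real library (Pillow).

```python
from PIL import Image

def record_frame(image_path, recorder):
    """Expose one frame on a gray-scale CRT film recorder, one color channel at a time."""
    rgb = Image.open(image_path).convert("RGB")
    # Image.split() yields the red, green and blue channels as gray-scale images.
    for filter_name, channel in zip(("red", "green", "blue"), rgb.split()):
        recorder.set_filter(filter_name)  # rotate the filter wheel (hypothetical call)
        recorder.expose(channel)          # show the gray-scale channel on the CRT and
                                          # open the shutter (hypothetical call)
    recorder.advance_film()               # move the film to the next frame (hypothetical call)
```

In a real recorder the same sequencing is handled by the device's firmware or driving software rather than by user code.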
Some models have detachable film holders to handle multiple formats with the same camera, or Polaroid backs to provide on-site review of output before exposing film. Uses Film recorders are used in digital printing to generate master negatives for offset and other bulk printing processes. For preview, archiving, and small-volume reproduction, film recorders have been rendered obsolete by modern printers that produce photographic-quality hardcopies directly on plain paper. They are also used to produce the master copies of movies that use computer animation or other special effects based on digital image processing. However, most cinemas nowadays use Digital Cinema Packages on hard drives instead of film stock. Computer graphics Film recorders were among the earliest computer graphics output devices; for example, the IBM 740 CRT Recorder was announced in 1954. Film recorders were also commonly used to produce slides for slide projectors, but this need is now largely met by video projectors that project images directly from a computer to a screen. The terms "slide" and "slide deck" are still commonly used in presentation programs. Current uses Currently, film recorders are primarily used in the motion picture film-out process for the ever-increasing amount of digital intermediate work being done. Although significant advances in large-venue video projection alleviate the need to output to film, there remains a deadlock between the motion picture studios and theater owners over who should pay for these costly projection systems. This, combined with the increase in international and independent film production, will keep the demand for film recording steady for at least a decade. Key manufacturers Traditional film recorder manufacturers have all but vanished from the scene or have evolved their product lines to cater to the motion picture industry. Dicomed was one such early provider of digital color film recorders. Polaroid, Management Graphics, Inc, MacDonald-Detwiler, Information International, Inc., and Agfa were other producers of film recorders. Arri is the only current major manufacturer of film recorders. Kodak Lightning I film recorder: one of the first laser recorders; needed an engineering staff to set up. The Kodak Lightning II film recorder used both gas and diode lasers to record onto film. The last LVT machines, produced by Kodak / Durst-Dice, ceased production in 2002. There are no LVT film recorders currently being produced. The LVT Saturn 1010 uses LED (RGB) exposure onto 8"x10" film at 1000–3000 ppi. LUX Laser Cinema Recorder from Autologic/Information International in Thousand Oaks, California; sales ended in March 2000. It was used on the 1997 film “Titanic”. Arri produces the Arrilaser line of laser-based motion picture film recorders. MGI produced the Solitaire line of CRT-based motion picture film recorders. Matrix, originally ImaPRO, a branch of the Agfa Division, produced the QCR line of CRT-based motion picture film recorders. Celco makes a line of CRT-based motion picture film recorders. Lasergraphics is a more recent entrant into the cine film recorder market, despite its twenty-year history in the traditional film recorder business. CCG, formerly Agfa film recorders, has been a steady manufacturer of film recorders based in Germany. In 2004 CCG introduced Definity, a motion picture film recorder utilizing LCD technology. In 2010 CCG introduced the first full LED LCD film recorder as a new step in film recording. 
Cinevation AS, in Drammen, Norway, started shipping the Cinevator, a real-time digital film recorder; it can record IN, IP and prints, with and without sound. Oxberry produced the Model 3100 film recorder camera system, with interchangeable pin-registered movements (shuttles) for 35mm (full frame/Silent, 1.33:1) and 16mm (regular 16, "2R"), and others have adapted the Oxberry movements for CinemaScope, 1.85:1, 1.75:1, 1.66:1, as well as Academy/Sound (1.37:1) in 35mm and Super-16 in 16mm ("1R"). For instance, the "Solitaire" and numerous others employed the Oxberry 3100 camera system. History Before video tape recorders or VTRs were invented, TV shows were either broadcast live or recorded to film for later showing, using the Kinescope process. In 1967, CBS Laboratories introduced the Electronic Video Recording format, which used video and telecined-to-video film sources. These were recorded with an electron-beam recorder at CBS's EVR mastering plant onto 35mm film stock in a rank of four strips, which was then slit down into four film copies for playback in an EVR player. All types of CRT recorders were (and still are) used for film recording. Some early examples used for computer-output recording were the 1954 IBM 740 CRT Recorder and the 1962 Stromberg-Carlson SC-4020, the latter using a Charactron CRT for text and vector graphic output to either 16mm motion picture film, 16mm microfilm, or hard-copy paper output. Later, in the 1970s and 80s, recording to black-and-white (and color, with three separate exposures for red, green, and blue) 16mm film was done with an EBR (electron beam recorder), the most prominent examples being made by 3M, for both video and COM (computer output microfilm) applications. Image Transform in Universal City, California used specially modified 3M EBR film recorders that could perform color film-out recording on 16mm by exposing three 16mm frames in a row (one red, one green and one blue). The film was then printed to color 16mm or 35mm film. The video fed to the recorder could be either NTSC, PAL or SECAM. Later, Image Transform used specially modified VTRs to record 24-frame video for their "Image Vision" system. The modified 1-inch type B videotape VTRs would record and play back 24-frame video at 10 MHz bandwidth, about twice the normal NTSC resolution. Modified 24 fps 10 MHz Bosch Fernseh KCK-40 cameras were used on the set. This was a custom pre-HDTV video system. Image Transform had modified other gear for this process. At its peak, this system was used in the production of the film "Monty Python Live at the Hollywood Bowl" in 1982. This was the first major pre-digital-intermediate post-production using a film recorder for film-out production. In 1988, companies in the United States collectively produced 715 million slides at a cost of $8.3 billion. Awards The Academy of Motion Picture Arts and Sciences awarded an Oscar to the makers of the Arrilaser film recorder. The Award of Merit Oscar from the Academy Scientific and Technical Award ceremony was given on 11 February 2012 to Franz Kraus, Johannes Steurer and Wolfgang Riedel. Steurer was awarded the Oskar Messter Memorial Medal two years later, in 2014, for his role in the development of the Arrilaser. See also Film-out Tape-out Digital Intermediate Grating Light Valve References smpte.org EBR The History of Television, 1942 to 2000, By Albert Abramson, Christopher H. 
Sterling, page 125 Cinmatographe Recorders, Posted on 27 December 2011 Film Recorder 4 July 2011 Movie Recorder LVT Rhino Image Recorder, 2008 The Philips-Miller Film Recorder, Early Audio Recorders Cinematographe Terme Conseill Film recorders smpte.org A Short History of Television Recording: Part II, Albert Abramson kodak.com IMAGICA Film Recorder Rewrites the Speed Limit Digital Film Recorder recommended film types Using the Polaroid Film Recorder, from marietta.edu Film Recorders What is a film recorder? dslreports.com Lasergraphics Film Recorder film output faq editorsguild.com DI work, The Editors Guild Magazine, Vol. 23, No. 3 – May/June 2002 Lasergraphics specs CREATING 35mm SLIDES WITH THE FILM RECORDER – Polaroid 8000 and Polaroid 7000 SFE Developers of film recorder to receive Oscar Visual effects in a digital world By Karen E. Goulekas, page 283 4K VERSUS 8K IMAGING A brief history of scanning and recording by C Glenn Kennel, director of technology for Cinesite’s Film Scanning and Recording and Digital Mastering divisions. Film recording from pgreen.co.uk Lux Laser Cinema Recorder: Get Speed and Quality with Laser Recorders, Jan 1, 1999 12:00 PM, Tom Thill Autologic Terminates New Activities on Laser Recorder, 7 March 2000|BARBARA MURPHY Light Illusion on Film recorders How to Use the LFR Plus Film Recorder from Oregon State University Solitaire Cine III FLX Specifications imapro.com film recorders Focus on Imaging, December, 2001, Film Recorders in the 21st Century by Jack and Sue Drafahl Difference in resolution 4k and 8k with examples External links Cinevator Arri Celco Lasergraphics Definity CCG Computer graphics Film and video technology Computer printers
308765
https://en.wikipedia.org/wiki/Case%20modding
Case modding
Case modification, commonly referred to as case modding, is the modification of a computer case or a video game console chassis. Modifying a computer case in any non-standard way is considered a case mod. Modding is done, particularly by hardware enthusiasts, to show off a computer's apparent power by showing off the internal hardware, and also to make it look aesthetically pleasing to the owner. Cases may also be modified to improve a computer's performance; this is usually associated with cooling and involves changes to components as well as the case. History When personal computers first became available to the public, the majority were produced in simple, beige-colored cases. This design is sometimes referred to as a beige box. Although this met the purpose of containing the components of the personal computer, many users considered their computers "tacky" or "dull", and some began modifying their existing chassis, or building their own from scratch. One of the original case mods is the "Macquarium", which consists of replacing the CRT screen in a Compact Macintosh case with a fishbowl. A new market for third-party computer cases and accessories began to develop, and today cases are available in a wide variety of colors and styles. Today the business of "modding" computers and their cases is a profitable endeavor, and modding competitions are common. Since 2017, computer hardware companies have started to offer some of their products with built-in RGB LED lighting, replacing earlier non-RGB (single-color) LED lighting. Non-RGB LED lighting had in turn begun to replace earlier CCFL-based lighting (mixed with single-color LEDs) in the late 2000s and early 2010s. RGB lighting may be integrated into fans, liquid cooler pumps, RAM modules, or graphics card coolers, or it may be installed in the case itself as an RGB light strip. RGB lights may be controlled by the motherboard with an onboard lighting controller, or externally with a separate controller. They may also draw power directly from the power supply. Many cases now (as of 2019) come with side windows and RGB fans. Common modifications Appearance Peripheral mods: Peripherals like the keyboard, mouse, and speakers are sometimes painted or otherwise modified to match the computer. Some system builders, in an effort to make their system more portable and convenient, install speakers and small screens into the case. Case building: Sometimes modders build entire cases from scratch. Some treat the case as a work of art. Others make it resemble something else entirely, such as a teddy bear, a wooden cabinet, a shelf mounted on a wall, or antique equipment such as a Macintosh Plus or an old Atari 2600 video game console. Relatively few case modders or builders make their computer cases from scratch; those who do sometimes put hundreds of hours into their work. The WMD case, Project Nighthawk, and Dark Blade case are a few examples of professional cases built from scratch. Component modding: This type of modding, as the name suggests, involves modifying the PC components themselves for a perceived improvement in appearance. An example is the relocation of buttons on optical drives. This is often done in combination with "stealthing", which conceals the drive by masking it with a blank face. A riskier modification is installing a window in a hard drive to show the platters and mechanism; this must be done in a clean room free of significant dust. 
Few people have attempted it, and results seem to vary. Some hard drives, including the WD Raptor, now come with a window as standard. Laptop modding: Laptops can be modified much like a typical computer case. While most laptop mods consist of new paint or other finishes, others engrave or cut designs into the laptop cover (behind the screen). Laptops may also be turned into digital photo frames. These types of mods will typically void the warranty of the device. To avoid warranty issues, skins or stickers can be purchased that are easily removable from the casing. Window mods: This refers to a window placed within one of the panels of a computer case. This is most often done to the left-hand side panel, and less often to the top panel. This modification is so popular that many of the major case manufacturers now offer cases with the windows pre-installed, or replaceable side panels with a window installed. Some companies even offer entire cases made out of transparent materials. A window kit may be modified to hold an LCD screen. Laser engraving can be done on acrylic windows to add a distinct look to a modded case. Lighting mods: A lighting mod refers to lighting in or on the computer case. This is usually achieved with cold cathode lights (CCLs), LED case fans, or electroluminescent wire lights. The lights are sometimes paired with sound controllers that make the lights pulse in time to sound. CCLs come in long tubes and generally produce a small amount of heat. LEDs come in many sizes and forms, most often seen in bars similar to CCLs or within fans. Electroluminescent wire, which takes the form of a small light rope, is sometimes embedded in cables such as SATA cables. Lighting modifications are usually paired with window mods to help show off the components. Internal components such as case fans, CPU heatsink fans, and power supplies are often available with built-in lighting. Paint: Painting a case is another way to distinguish a build from others. Spray paint is the method preferred among amateur modders, and many spray-painting guides are available. The finish cannot be compared to automotive paint or powder coating, but it is a simple way to change the look of a case. Re-coloring the plastics of a case, keyboard, mouse and speakers with vinyl dye is another method of personalizing a system. Vinyl dye is as easy to use as spray paint but much more durable: it does not chip or scratch off, and it does not create a ‘layer’ in the way paint does. Cable management: Routing cables, most often in computer cases, to be aesthetically pleasing is also a common practice in case modding. Similarly, covering the cables in fabric, known as cable sleeving, can also be undertaken to give a more uniform look consistent with the theme of the case. Function Cooling mods: There are many modifications that can fall into this category. The most common one is simply drilling a mount for a new fan, or removing a restrictive fan grill. Others include air ducts, water cooling, filtering, sealing openings to make air flow over hot components instead of escaping near where it entered, or even adding a tank of pressurized carbon dioxide or liters of mineral oil to the case. These modifications are often performed by overclockers looking either for better cooling of hot components or for noise reduction. Some fan modifications are merely a show of modding skill or talent and have no functional purpose. 
Hardcore overclockers often install cooling systems for the sole purpose of achieving performance records. Such systems may include water cooling, phase-change materials, thermoelectric (Peltier) coolers, and liquid nitrogen. Less common modifications Automotive paint & other finishes: Automotive paint refers to the paint typically seen on cars and trucks. This type of finish requires a compressed air source, such as an air compressor or CO2 tank, and a spray gun. It is more expensive than a finish using spray cans, but when done skillfully it can be better looking and much more durable. Other methods include powder coating, which is highly durable though not quite as aesthetically pleasing to many modders as automotive paint. Electroplating can also be done on steel computer cases and parts. Aluminum cases can be plated or anodized as well, and processes are available to plate plastic cases. Plated coatings can range from nickel to chrome and even gold. More elaborate finishes can be crafted by using a combination of techniques, such as chrome plating with a transparent color coat. Body filler: Body filler (or Bondo) is a two-part putty often used to fix dents in automobiles. Case modders use it to fill and sculpt their own creations. When mixed with a paste catalyst, the filler hardens in a short period of time and can be sanded, ground or cut to a desired shape. An alternative system uses fiberglass resin (catalyzed with liquid hardener) and either fiberglass cloth or mat to fill holes and form shapes. Lacquer-based spot putty is often used to fill smaller imperfections before the application of primer. Typically, a case modder uses a combination of these materials to obtain the desired result. This method is usually used on the front plastic bezel of a computer case to give it a new look. Contests Many websites and companies feature contests for case modders, awarding prizes and accolades to the winners. Examples include bit-tech's Mod of the Month and Mod of the Year competitions, while some of these contests are sponsored by computer enthusiast magazines, such as CPU magazine or Custom PC Magazine, both of which have monthly modding contests. Other contests are sometimes supported by computer parts manufacturers. Console case modding Console case modding refers to the modification of the case of a game console. The most common consoles to modify are the Xbox and Xbox 360, because there is much more room inside to customize them with items such as lights and fans. Moreover, the Xbox 360 requires additional cooling beyond the factory configuration to avoid overheating issues caused by the use of a lead-free solder that could not withstand the temperatures standard solder can. These consoles and their controllers are also relatively easy to take apart. For those who do not wish to scratch-build mods, there are several companies that sell transparent Xbox cases and various cooling/lighting equipment for them. Console case modding started in the late 1980s when the NES and Sega Genesis were released; many customers simply put pictures or stickers on them until the PlayStation was released. Many case modders then started to change the hardware, for example by altering consoles to play copied games (known as 'chipping' the console). The most common modification for the PlayStation was this 'chipping' process. 
When the Nintendo 64, Dreamcast and PlayStation 2 were released, many chipped them, styled them, and added additional cooling; some went as far as changing the hardware itself. When the Xbox and Xbox 360 were released, many modders personally customized them further, using neon lights, transparent cases, fans, and PC hard drives (as opposed to Xbox-branded drives). Many modders found that altering the interior of the Xbox 360 was difficult due to the absence of a spare power cable (in a PC, such a cord normally connects the hard disk drive to the power supply). Despite these shortcomings, modders found a way to power neon lighting and other equipment from the DVD-ROM drive's power supply; however, this left insufficient power for the hard disk drive and often caused freezing during disk access. Another common method for internal case modding takes power from the internal fan's connector by splitting the cord with a "Y" connector. However, more up-to-date modders use power points underneath where the PSU (power supply) plugs in, which draws little or no power away from the rest of the console. See also Hardware hacking Modchip Modding Overclocking USB decoration References External links Case Mod World Series 2020 CaseModGod - Mod it Til it Bleeds! Modders Inc Computer enclosure Computer hardware tuning Computing culture
1398059
https://en.wikipedia.org/wiki/Kingsoft
Kingsoft
Kingsoft Corporation is a Chinese software company based in Beijing. The company was founded in 1988 by Qiu Bojun. Kingsoft operates four subsidiaries: Seasun for video game development, Cheetah Mobile for mobile internet apps, Kingsoft Cloud for cloud storage platforms, and WPS for office software, including WPS Office. It also produced security software under the Kingsoft Security name. The company is listed on the Hong Kong Stock Exchange. The company reached its prime during 2008–2010; however, Qiu later sold his 15.68% stake in Kingsoft to Tencent in July 2011. Kingsoft Cloud is a cloud computing company in China. It owns data centers in mainland China, Hong Kong, Russia, Southeast Asia, and North America. Its most popular game is JX Online 3, launched in 2009. References External links Chinese brands Chinese companies established in 1988 Software companies established in 1988 Companies based in Beijing Companies listed on the Hong Kong Stock Exchange Multinational companies headquartered in China Privately held companies of China Software companies of China Antivirus software
12507548
https://en.wikipedia.org/wiki/Advanced%20Learning%20and%20Research%20Institute
Advanced Learning and Research Institute
The Advanced Learning and Research Institute (ALaRI), part of the faculty of informatics, was established in 1999 at the University of Lugano (Università della Svizzera italiana, USI) with the mission of promoting research and education in embedded systems. Within a few years the Faculty of Informatics has become one of Switzerland's major destinations for teaching and research, ranking third after the two federal institutes of technology in Zurich and Lausanne. ALaRI offers a master's degree in Cyber-Physical and Embedded Systems in cooperation with the Politecnico di Milano and the Federal Institute of Technology in Zurich (ETHZ). This master's program is among the first in the world to address the fast-growing area of cyber-physical and embedded systems, i.e., systems and "hidden" computational devices that interact directly with the physical world. Cyber-physical and embedded systems are present at home, at work and in the environment itself, providing the backbone technologies for smart homes, buildings and cities, the Internet of things, smart energy production, management and metering, and smart transportation and healthcare. As a consequence, the related industry is continuously growing, with annual revenue on the order of a trillion euros. Master of Science in Cyber-Physical and Embedded Systems The Master of Science in Cyber-Physical and Embedded Systems offers opportunities to application designers and system developers by integrating areas such as microelectronics, physical modeling, computer science, machine learning, telecommunication and control, and by focusing on advanced applications. To meet the need for an interdisciplinary approach, the teaching plan equips students with a broad body of knowledge in the area of cyber-physical and embedded systems. The educational model focuses on a system-level methodological perspective as well as on the development of interpersonal skills valued in industry, such as teamwork, marketing and management strategies. ALaRI research activities focus on topics of scientific interest and industrial applicability, based on real-life design methodologies that take into account system properties such as performance, dependability, intelligence, security and energy efficiency. The program, designed for students holding a bachelor's degree in computer science, computer engineering or, more generally, in the domain of information and communication technologies, is built around three major methodological pillars: interaction with the physical world, the embedded (networked) system, and embedded applications. Courses, integrated to provide a holistic picture of these facets, are given by internationally recognized professors and industrial leaders. Both modular intensive courses and regular semester-long courses are offered, so that technological awareness, competences and problem-solving abilities are developed together within the same framework, building the profiles and qualifications sought by industry and research. Classroom education is complemented by hands-on laboratory experience, so that methodological aspects are reflected in a real-world environment. 
This approach has proven to be effective and successful, as endorsed by dozens of ALaRI Alumni, as it naturally facilitates the assimilation of deep technical concepts and core competences immediately usable in industry and academia. The study program of the Master of Science in Cyber-Physical and Embedded Systems consists of four full-time study semesters (120 ECTS over two years). The thesis starts during the third semester and completes by the end of the fourth. Each individual student is assisted in tailoring the teaching plan to his/her previous competences and specific interests. To broaden the student's perspective, up to 18 ECTS can be obtained with elective courses chosen from the program. Compulsory Courses Introduction to CPS & ES Overview of CPS and ES; Sensors; Actuators; Principles of metrology; Microcontrollers; Networking; Real time; Lab: on Arduino. Physical Modelling Mathematics for CPS: Linear algebra (recalls); Probability and statistics (recalls); ODE; Fourier series & transforms; DFT and FFT; Sampling theorem; Laplace transforms; Zeta transforms; Modelling the physical world: continuous and time discrete systems; Dynamics (constant of time and stability); Discretization methods; From theory to practice: modeling examples taken from electrical and physical systems. Microelectronics Integrated Circuits; Layout design; Design of CMOS cells; From gate to arithmetic circuit and register file; Low power design at the CMOS level; Design of MEMS sensors; Lab: cell design. Embedded Systems Architectures Summary of general-purpose architectures (recall). Focus on the ARM architecture; The coprocessor concept; Multiprocessor fundamentals; GPU architectures: basics, programming approaches. Lab on ARM with interrupt handling and design of a device driver. Software Engineering Principles of software engineering (for embedded systems); Requirements engineering; Testing, inspection and documentation; Software product lines; Component-based development; Software quality assurance; Software maintenance. Digital Signal Processing Linear filters; Design of IIR and FIR; Filter banks; Adaptive filters (LMS); Lab: Applications on filters; Moving a numerical computation to a DSP. Project Management & Leadership Project management; series of individual lectures delivered by recognized lectures. Nanosystems: Devices and design The synthesis and place-and-route chain; Nanosystems: Systems-on-Chip and Labs-on-Chip; Biosensors and nanosensors; Lab: hands on the design of a nanosystem (using VHDL and state-of-the art tools). Heterogeneous multicore architectures Design of heterogeneous multicore architectures; The Network-on-Chip concept; Architectural support to parallel execution; self-adaptation; Power management; Communication mechanisms; Management of multicore heterogeneous architectures. RT Systems OS (recalls); Tasks & threads (recalls); HW & SW I/O (recalls); Real-time computing; Real-time scheduling; Real-time kernels; Lab: hands on real-time systems. CPS Intelligence Dependability and Reliability; Fault detection, diagnosis and recovery; Coding techniques; Adaptation mechanisms in ES; Learning in a nonstationary environments; Cognitive fault diagnosis for CPS; Lab: adaptation and reliability in CPS. Cyber Communication Communication technologies and protocols for wired networks (e.g., CAN bus, Ethernet, USB, optical communication) and wireless networks (e.g., ZigBee, NFC, bluetooth, Wi-Fi). Lab: hands on selected technologies, e.g., Canbus and Zigbee. 
Digital Automation Controllers & stability issues; Design of discrete-time controllers; Lab:design a full controlled CPS system (if possible an experience in mechatronics). Reprogrammable Systems Advanced VHDL; Reprogrammable systems; FPGAs with complex blocks (processors, DSP); Radiation hard FPGAs; Reconfigurable FPGAs; Lab: hands on reconfigurable FPGAs. Specification Languages From application requirements to specifications; Models and techniques for system level specification; A top-down approach for specification refinement; Behavioural impact and cost of incomplete specifications; Lab: System C, from behavioral to RTL. Optimizing Embedded Applications Deterministic Vs. probabilistic approaches for complexity management; Randomized algorithms; Evolutionary optimization; Application porting to low precision platforms; Robustness analysis; Techniques for performance assessment at the application level. Multicore embedded applications design Strategies for designing a multicore application; Regular vs. irregular applications. Lab on heterogeneous multicore architectures (with GPUs). Physical computing Application design and integration of distributed embedded devices; Focus on short-range wireless networking; Mobile interfaces and embedded sensing; Remote sensing; Lab on the Arduino board. Cyber-security Introduction to cryptography; Symmetric and asymmetric algorithms; Key exchange; Digital signatures; HW & SW implementations. Validation and Verification Formal analysis for hardware & Sw validation. Trends and threats in Cyber-security Side-channels attacks; Malware; Quantum-security; Post-quantum algorithms; Hardware Trojans: Lab on breaking a secure device; Malware design. Intelligent systems Supervised and unsupervised learning; Features extraction and selection; Recurrent networks (RC, ESN); Convolutional neural Networks; Deep learning; Classification and regression real-world problems. Mobile Computing Data collection using mobile phones; Local and remote storage of sensor data (also on the cloud); Location sensing and estimation; User interfaces; Lab: hands on the design of a mobile applications with Android. Elective Courses Business & Entrepreneurship Business idea and Business plan, Business strategies, Product and price, Market communication, Sales and distributions, patent and protection of IPs. HW/SW Codesign HW/SW Codesign, lab on zynq board or on a softcore in FPGA. Future trends in computer architectures Superscalar, Vector, Multi-thread and multicore processors; future trends. Low -Power Design HW: Frequency and voltage scaling; power consumption minimization; tools for power optimization. Energy vs Power optimization. SW vs HW power optimization. SW: Sw strategies for designing energy aware applications. Human-Computer Interaction User-Centered Design Methodologies, Interfaces and Information Visualization Systems, Mobile App Design, Digital Fabrication. Trends and threats in Cyber-security Side-channels attacks; Malware; Quantum-security; Post-quantum algorithms, Hardware Trojans: Lab on breaking a device and on designing malware. Intelligent systems Mobile Computing References External links ALaRI homepage IEEE Student Branch University of Lugano Research institutes in Switzerland Schools in the canton of Ticino Research institutes established in 1999 1999 establishments in Switzerland
26524575
https://en.wikipedia.org/wiki/Distributed%20operating%20system
Distributed operating system
A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. They handle jobs which are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. The second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications. The microkernel and the collection of management components work together. They support the system's goal of integrating multiple resources and processing functionality into an efficient and stable system. This seamless integration of individual nodes into a global system is referred to as transparency, or single system image, describing the illusion provided to users of the global system's appearance as a single computational entity. Description A distributed OS provides the essential services and functionality required of an OS but adds attributes and particular configurations to allow it to support additional requirements such as increased scale and availability. To a user, a distributed OS works in a manner similar to a single-node, monolithic operating system. That is, although it consists of multiple nodes, it appears to users and applications as a single node. Separating minimal system-level functionality from additional user-level modular services provides a "separation of mechanism and policy". Mechanism and policy can be simply interpreted as "how something is done" versus "what is to be done," respectively. This separation increases flexibility and scalability. Overview The kernel At each locale (typically a node), the kernel provides a minimally complete set of node-level utilities necessary for operating a node's underlying hardware and resources. These mechanisms include allocation, management, and disposition of a node's resources, processes, communication, and input/output management support functions. Within the kernel, the communications sub-system is of foremost importance for a distributed OS. In a distributed OS, the kernel often supports a minimal set of functions, including low-level address space management, thread management, and inter-process communication (IPC). A kernel of this design is referred to as a microkernel. Its modular nature enhances reliability and security, essential features for a distributed OS. System management System management components are software processes that define the node's policies. These components are the part of the OS outside the kernel. These components provide higher-level communication, process and resource management, reliability, performance and security. The components match the functions of a single-entity system, adding the transparency required in a distributed environment. The distributed nature of the OS requires additional services to support a node's responsibilities to the global system. In addition, the system management components accept the "defensive" responsibilities of reliability, availability, and persistence. These responsibilities can conflict with each other. A consistent approach, balanced perspective, and a deep understanding of the overall system can assist in identifying diminishing returns. 
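This separation of mechanism and policy can be illustrated with a short sketch. The Python fragment below is only a hypothetical illustration; the names (rebalance, newest_request_policy, and so on) are invented for this example and do not describe any particular distributed operating system. It shows the general idea: the migration mechanism stays fixed, while the selection policy that decides what to move can be swapped out.

```python
# Hypothetical sketch of "mechanism versus policy": the migration mechanism below is
# fixed, while interchangeable policy functions decide which process should move.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Process:
    pid: int
    arrival_time: float  # when the request arrived
    load: float          # rough CPU demand

# Policies decide *what* is to be done (which process to pick).
def newest_request_policy(candidates: List[Process]) -> Process:
    return max(candidates, key=lambda p: p.arrival_time)

def heaviest_load_policy(candidates: List[Process]) -> Process:
    return max(candidates, key=lambda p: p.load)

# The mechanism carries out *how* it is done, with no say in the choice.
def migrate(process: Process, target_node: str) -> None:
    print(f"migrating pid={process.pid} to {target_node}")  # stand-in for real state transfer

def rebalance(candidates: List[Process], target_node: str,
              policy: Callable[[List[Process]], Process]) -> None:
    if candidates:
        migrate(policy(candidates), target_node)

procs = [Process(1, 10.0, 0.9), Process(2, 12.5, 0.2)]
rebalance(procs, "node-B", newest_request_policy)  # moves pid=2 (newest request)
rebalance(procs, "node-B", heaviest_load_policy)   # moves pid=1 (heaviest load)
```

Swapping the policy requires no change to the mechanism, which is exactly the flexibility this separation is meant to buy.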
Separation of policy and mechanism mitigates such conflicts. Working together as an operating system The architecture and design of a distributed operating system must realize both individual node and global system goals. Architecture and design must be approached in a manner consistent with separating policy and mechanism. In doing so, a distributed operating system attempts to provide an efficient and reliable distributed computing framework allowing for an absolute minimal user awareness of the underlying command and control efforts. The multi-level collaboration between a kernel and the system management components, and in turn between the distinct nodes in a distributed operating system is the functional challenge of the distributed operating system. This is the point in the system that must maintain a perfect harmony of purpose, and simultaneously maintain a complete disconnect of intent from implementation. This challenge is the distributed operating system's opportunity to produce the foundation and framework for a reliable, efficient, available, robust, extensible, and scalable system. However, this opportunity comes at a very high cost in complexity. The price of complexity In a distributed operating system, the exceptional degree of inherent complexity could easily render the entire system an anathema to any user. As such, the logical price of realizing a distributed operation system must be calculated in terms of overcoming vast amounts of complexity in many areas, and on many levels. This calculation includes the depth, breadth, and range of design investment and architectural planning required in achieving even the most modest implementation. These design and development considerations are critical and unforgiving. For instance, a deep understanding of a distributed operating system's overall architectural and design detail is required at an exceptionally early point. An exhausting array of design considerations are inherent in the development of a distributed operating system. Each of these design considerations can potentially affect many of the others to a significant degree. This leads to a massive effort in balanced approach, in terms of the individual design considerations, and many of their permutations. As an aid in this effort, most rely on documented experience and research in distributed computing power. History Research and experimentation efforts began in earnest in the 1970s and continued through the 1990s, with focused interest peaking in the late 1980s. A number of distributed operating systems were introduced during this period; however, very few of these implementations achieved even modest commercial success. Fundamental and pioneering implementations of primitive distributed operating system component concepts date to the early 1950s. Some of these individual steps were not focused directly on distributed computing, and at the time, many may not have realized their important impact. These pioneering efforts laid important groundwork, and inspired continued research in areas related to distributed computing. In the mid-1970s, research produced important advances in distributed computing. These breakthroughs provided a solid, stable foundation for efforts that continued through the 1990s. The accelerating proliferation of multi-processor and multi-core processor systems research led to a resurgence of the distributed OS concept. 1950s The DYSEAC One of the first efforts was the DYSEAC, a general-purpose synchronous computer. 
In one of the earliest publications of the Association for Computing Machinery, in April 1954, a researcher at the National Bureau of Standards (now the National Institute of Standards and Technology, NIST) presented a detailed specification of the DYSEAC. The introduction focused upon the requirements of the intended applications, including flexible communications, but also mentioned other computers. The specification discussed the architecture of multi-computer systems, preferring peer-to-peer rather than master-slave. This is one of the earliest examples of a computer with distributed control. Department of the Army reports certified it as reliable and noted that it passed all acceptance tests in April 1954. It was completed and delivered on time, in May 1954. This was a "portable computer", housed in a tractor-trailer, with 2 attendant vehicles and 6 tons of refrigeration capacity. Lincoln TX-2 Described as an experimental input-output system, the Lincoln TX-2 emphasized flexible, simultaneously operational input-output devices, i.e., multiprogramming. The design of the TX-2 was modular, supporting a high degree of modification and expansion. The system employed the Multiple-Sequence Program Technique. This technique allowed multiple program counters to each associate with one of 32 possible sequences of program code. These explicitly prioritized sequences could be interleaved and executed concurrently, affecting not only the computation in process, but also the control flow of sequences and switching of devices as well. Much discussion related to device sequencing. As in the DYSEAC, the TX-2's separately programmed devices could operate simultaneously, increasing throughput. The full power of the central unit was available to any device. The TX-2 was another example of a system exhibiting distributed control, its central unit not having dedicated control. Intercommunicating Cells One early effort at abstracting memory access was Intercommunicating Cells, where a cell was composed of a collection of memory elements. A memory element was basically a binary electronic flip-flop or relay. Within a cell there were two types of elements, symbol and cell. Each cell structure stores data in a string of symbols, consisting of a name and a set of parameters. Information is linked through cell associations. The theory contended that addressing is a wasteful and non-valuable level of indirection. Information was accessed in two ways, direct and cross-retrieval. Direct retrieval accepts a name and returns a parameter set. Cross-retrieval projects through parameter sets and returns a set of names containing the given subset of parameters. This was similar to a modified hash table data structure that allowed multiple values (parameters) for each key (name). This configuration was ideal for distributed systems. The constant-time projection through memory for storing and retrieval was inherently atomic and exclusive. The cellular memory's intrinsic distributed characteristics would be invaluable. The impact on the user, hardware/device, or application programming interfaces was indirect. 
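The two retrieval modes described above can be pictured as operations over a mapping from names to parameter sets, much like the multi-valued hash table analogy. The short Python sketch below is a loose modern analogy rather than a reconstruction of the original design; the cell names and parameters are invented for illustration.

```python
# Loose, modern analogy of the two access modes: a name -> parameter-set store.
from typing import Dict, Set

store: Dict[str, Set[str]] = {
    "cell-A": {"red", "fast", "small"},
    "cell-B": {"red", "slow"},
    "cell-C": {"blue", "fast", "small"},
}

def direct_retrieval(name: str) -> Set[str]:
    """Accept a name and return its parameter set."""
    return store.get(name, set())

def cross_retrieval(subset: Set[str]) -> Set[str]:
    """Return every name whose parameter set contains the given subset of parameters."""
    return {name for name, params in store.items() if subset <= params}

print(direct_retrieval("cell-A"))          # {'red', 'fast', 'small'} (order may vary)
print(cross_retrieval({"red"}))            # {'cell-A', 'cell-B'}
print(cross_retrieval({"fast", "small"}))  # {'cell-A', 'cell-C'}
```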
The authors were considering distributed systems, stating: Foundational work Coherent memory abstraction Algorithms for scalable synchronization on shared-memory multiprocessors File System abstraction Measurements of a distributed file system Memory coherence in shared virtual memory systems Transaction abstraction Transactions Sagas Transactional Memory Composable memory transactions Transactional memory: architectural support for lock-free data structures Software transactional memory for dynamic-sized data structures Software transactional memory Persistence abstraction OceanStore: an architecture for global-scale persistent storage Coordinator abstraction Weighted voting for replicated data Consensus in the presence of partial synchrony Reliability abstraction Sanity checks The Byzantine Generals Problem Fail-stop processors: an approach to designing fault-tolerant computing systems Recoverability Distributed snapshots: determining global states of distributed systems Optimistic recovery in distributed systems Distributed computing models Three basic distributions To better illustrate this point, examine three system architectures; centralized, decentralized, and distributed. In this examination, consider three structural aspects: organization, connection, and control. Organization describes a system's physical arrangement characteristics. Connection covers the communication pathways among nodes. Control manages the operation of the earlier two considerations. Organization A centralized system has one level of structure, where all constituent elements directly depend upon a single control element. A decentralized system is hierarchical. The bottom level unites subsets of a system's entities. These entity subsets in turn combine at higher levels, ultimately culminating at a central master element. A distributed system is a collection of autonomous elements with no concept of levels. Connection Centralized systems connect constituents directly to a central master entity in a hub and spoke fashion. A decentralized system (aka network system) incorporates direct and indirect paths between constituent elements and the central entity. Typically this is configured as a hierarchy with only one shortest path between any two elements. Finally, the distributed operating system requires no pattern; direct and indirect connections are possible between any two elements. Consider the 1970s phenomena of “string art” or a spirograph drawing as a fully connected system, and the spider's web or the Interstate Highway System between U.S. cities as examples of a partially connected system. Control Centralized and decentralized systems have directed flows of connection to and from the central entity, while distributed systems communicate along arbitrary paths. This is the pivotal notion of the third consideration. Control involves allocating tasks and data to system elements balancing efficiency, responsiveness, and complexity. Centralized and decentralized systems offer more control, potentially easing administration by limiting options. Distributed systems are more difficult to explicitly control, but scale better horizontally and offer fewer points of system-wide failure. 
The associations conform to the needs imposed by the system's design, not to organizational chaos. Design considerations Transparency Transparency or single-system image refers to the ability of an application to treat the system on which it operates without regard to whether it is distributed and without regard to hardware or other implementation details. Many areas of a system can benefit from transparency, including access, location, performance, naming, and migration. The consideration of transparency directly affects decision making in every aspect of design of a distributed operating system. Transparency can impose certain requirements and/or restrictions on other design considerations. Systems can optionally violate transparency to varying degrees to meet specific application requirements. For example, a distributed operating system may present a hard drive on one computer as "C:" and a drive on another computer as "G:". The user does not require any knowledge of device drivers or the drive's location; both devices work the same way, from the application's perspective. A less transparent interface might require the application to know which computer hosts the drive. Transparency domains: Location transparency – Location transparency comprises two distinct aspects of transparency, naming transparency and user mobility. Naming transparency requires that nothing in the physical or logical references to any system entity should expose any indication of the entity's location, or its local or remote relationship to the user or application. User mobility requires the consistent referencing of system entities, regardless of the system location from which the reference originates. Access transparency – Local and remote system entities must remain indistinguishable when viewed through the user interface. The distributed operating system maintains this perception through the exposure of a single access mechanism for a system entity, regardless of that entity being local or remote to the user. Transparency dictates that any differences in methods of accessing any particular system entity—either local or remote—must be both invisible to, and undetectable by, the user. Migration transparency – Resources and activities migrate from one element to another controlled solely by the system and without user/application knowledge or action. Replication transparency – The process or fact that a resource has been duplicated on another element occurs under system control and without user/application knowledge or intervention. Concurrency transparency – Users/applications are unaware of and unaffected by the presence/activities of other users. Failure transparency – The system is responsible for detection and remediation of system failures. No user knowledge/action is involved other than waiting for the system to resolve the problem. Performance transparency – The system is responsible for the detection and remediation of local or global performance shortfalls. Note that system policies may prefer some users/user classes/tasks over others. No user knowledge or interaction is involved. Size/Scale transparency – The system is responsible for managing its geographic reach, number of nodes, and level of node capability without any required user knowledge or interaction. Revision transparency – The system is responsible for upgrades and revisions and changes to system infrastructure without user knowledge or action. 
Control transparency – The system is responsible for providing all system information, constants, properties, configuration settings, etc. in a consistent appearance, connotation, and denotation to all users and applications. Data transparency – The system is responsible for providing data to applications without user knowledge or action relating to where the system stores it. Parallelism transparency – The system is responsible for exploiting any ability to parallelize task execution without user knowledge or interaction. Arguably the most difficult aspect of transparency, and described by Tanenbaum as the "Holy grail" for distributed system designers. Inter-process communication Inter-Process Communication (IPC) is the implementation of general communication, process interaction, and dataflow between threads and/or processes both within a node, and between nodes in a distributed OS. The intra-node and inter-node communication requirements drive low-level IPC design, which is the typical approach to implementing communication functions that support transparency. In this sense, Interprocess communication is the greatest underlying concept in the low-level design considerations of a distributed operating system. Process management Process management provides policies and mechanisms for effective and efficient sharing of resources between distributed processes. These policies and mechanisms support operations involving the allocation and de-allocation of processes and ports to processors, as well as mechanisms to run, suspend, migrate, halt, or resume process execution. While these resources and operations can be either local or remote with respect to each other, the distributed OS maintains state and synchronization over all processes in the system. As an example, load balancing is a common process management function. Load balancing monitors node performance and is responsible for shifting activity across nodes when the system is out of balance. One load balancing function is picking a process to move. The kernel may employ several selection mechanisms, including priority-based choice. This mechanism chooses a process based on a policy such as 'newest request'. The system implements the policy Resource management Systems resources such as memory, files, devices, etc. are distributed throughout a system, and at any given moment, any of these nodes may have light to idle workloads. Load sharing and load balancing require many policy-oriented decisions, ranging from finding idle CPUs, when to move, and which to move. Many algorithms exist to aid in these decisions; however, this calls for a second level of decision making policy in choosing the algorithm best suited for the scenario, and the conditions surrounding the scenario. Reliability Distributed OS can provide the necessary resources and services to achieve high levels of reliability, or the ability to prevent and/or recover from errors. Faults are physical or logical defects that can cause errors in the system. For a system to be reliable, it must somehow overcome the adverse effects of faults. The primary methods for dealing with faults include fault avoidance, fault tolerance, and fault detection and recovery. Fault avoidance covers proactive measures taken to minimize the occurrence of faults. These proactive measures can be in the form of transactions, replication and backups. Fault tolerance is the ability of a system to continue operation in the presence of a fault. 
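One common way to provide fault tolerance is to fail over to a replica. The Python sketch below is a hedged illustration with invented names, not the interface of any actual distributed operating system: a request that fails on the primary copy is retried transparently on another replica, so the caller keeps seeing a single, available resource.

```python
# Hypothetical failover sketch: detect a fault on the primary and recover by retrying
# the request on a replica, hiding the failure from the caller.
from typing import Callable, Sequence

class NodeFailure(Exception):
    pass

def call_with_failover(replicas: Sequence[Callable[[str], str]], request: str) -> str:
    last_error = None
    for replica in replicas:        # try the primary first, then each replica in turn
        try:
            return replica(request)
        except NodeFailure as err:  # fault detected...
            last_error = err        # ...fall through to the next copy (recovery)
    raise RuntimeError("all replicas failed") from last_error

def failing_primary(request: str) -> str:
    raise NodeFailure("primary node is down")

def healthy_replica(request: str) -> str:
    return f"handled '{request}' on replica"

print(call_with_failover([failing_primary, healthy_replica], "read block 42"))
# -> handled 'read block 42' on replica
```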
In the event, the system should detect and recover full functionality. In any event, any actions taken should make every effort to preserve the single system image. Availability Availability is the fraction of time during which the system can respond to requests. Performance Many benchmark metrics quantify performance; throughput, response time, job completions per unit time, system utilization, etc. With respect to a distributed OS, performance most often distills to a balance between process parallelism and IPC. Managing the task granularity of parallelism in a sensible relation to the messages required for support is extremely effective. Also, identifying when it is more beneficial to migrate a process to its data, rather than copy the data, is effective as well. Synchronization Cooperating concurrent processes have an inherent need for synchronization, which ensures that changes happen in a correct and predictable fashion. Three basic situations that define the scope of this need: one or more processes must synchronize at a given point for one or more other processes to continue, one or more processes must wait for an asynchronous condition in order to continue, or a process must establish exclusive access to a shared resource. Improper synchronization can lead to multiple failure modes including loss of atomicity, consistency, isolation and durability, deadlock, livelock and loss of serializability. Flexibility Flexibility in a distributed operating system is enhanced through the modular characteristics of the distributed OS, and by providing a richer set of higher-level services. The completeness and quality of the kernel/microkernel simplifies implementation of such services, and potentially enables service providers greater choice of providers for such services. Research Replicated model extended to a component object model Architectural Design of E1 Distributed Operating System The Cronus distributed operating system Design and development of MINIX distributed operating system Complexity/Trust exposure through accepted responsibility Scale and performance in the Denali isolation kernel. Multi/Many-core focused systems The multikernel: a new OS architecture for scalable multicore systems. Corey: an Operating System for Many Cores. Almos: Advanced Locality Management Operating System for cc-NUMA Many-Cores. Distributed processing over extremes in heterogeneity Helios: heterogeneous multiprocessing with satellite kernels. Effective and stable in multiple levels of complexity Tessellation: Space-Time Partitioning in a Manycore Client OS. See also Distributed computing Plan 9 from Bell Labs Inferno MINIX Single system image (SSI) Computer systems architecture Multikernel List of important publications in concurrent, parallel, and distributed computing Operating System Projects Edsger W. Dijkstra Prize in Distributed Computing List of distributed computing conferences List of distributed computing projects References Further reading External links MIT Parallel and Distributed Operating System Laboratory UCB parallel computing laboratory Parallel Data laboratory The distributed environment of Plan 9 E1 Distributed Operating System Amoeba DOS Source Amoeba home page USENIX: Advanced Computing association How Stuff Works - Operating Systems Algorithms for scalable synchronization Computer networks History of software Operating systems
40120621
https://en.wikipedia.org/wiki/Satellite%20Reign
Satellite Reign
Satellite Reign is a cyberpunk real-time tactics video game that was released in August 2015 for Windows, macOS, and Linux. The game was developed by Australia's 5 Lives Studios and is a spiritual successor to the Syndicate series, which co-founder and programmer Mike Diskett had worked on. The name is derived from one of the weapons featured in Syndicate Wars called "Satellite Rain." Development After the release of the Syndicate reboot in 2012, the Syndicate Wars producer and lead programmer, Mike Diskett, expressed his displeasure over the game's lack of similarity to previous entries in the series: "A lot of Syndicate fans, including myself, got our hopes up when we heard a new Syndicate game was in development, but in the end it turned out to be nothing like the original games." This encouraged him to create the Brisbane-based 5 Lives Studios (which included team members who had worked on other video game series such as Grand Theft Auto, L.A. Noire, and Darksiders) and develop a spiritual successor to the game he made back in 1993: "I've decided to take it upon myself to deliver what the fans really wanted." Unlike the reboot, which was a first-person shooter, 5 Lives Studios tried to build upon the real-time tactics gameplay experience found in the original games by introducing a fixed camera system and character classes. On June 28, 2013, Satellite Reign was announced on the crowdfunding website Kickstarter with a US$546,875 goal. On July 29, 2013, the project was successfully funded with US$720,832 raised. The game was released for Windows, macOS, and Linux on August 28, 2015. A post-launch update introducing four-player cooperative gameplay was released in July 2016. The soundtrack was composed by Russell Shaw, who worked on both Syndicate and Syndicate Wars. Plot Satellite Reign takes place in an unnamed fictional city referred to as "The City". A corporation named Dracogenics rises to prominence after releasing a prototype named Resurrection-Tech that is capable of providing immortality. Dracogenics turns to corporate crime and bribes politicians with immortality in exchange for control and influence. This eventually leads to the privatization of The City's services and full corporate control over The City's police force. Civil unrest arises and is suppressed by Dracogenics, but a rival corporation soon comes to light and launches anti-Dracogenics attacks. The player controls this rival corporation and must drive Dracogenics from The City. Reception Satellite Reign has a score of 75 out of 100 on Metacritic indicating "generally favorable reviews". References External links 2015 video games Crowdfunded video games Cyberpunk video games Kickstarter-funded video games Linux games Open-world video games Organized crime video games MacOS games Real-time tactics video games Science fiction video games Steam Greenlight games Video games developed in Australia Windows games
31442421
https://en.wikipedia.org/wiki/Commodore%20USA
Commodore USA
Commodore USA, LLC was a computer company based in Pompano Beach, Florida, with additional facilities in Fort Lauderdale, Florida. Commodore USA, LLC was founded in April 2010. The company's goal was to sell a new line of PCs using the classic Commodore and Amiga name brands of personal computers, having licensed the Commodore brand from Commodore Licensing BV on August 25, 2010 and the Amiga brand from Amiga, Inc. on August 31, 2010. The Amiga brand license was however disputed by Hyperion Entertainment, on the basis of a 2009 settlement agreement between Hyperion and Amiga. The last news release from the website is dated March 21, 2012. In January 2013, it was revealed that founder and driving force Barry S. Altman died of cancer on December 8, 2012. The last post on Commodore USA's forum came from Leo Nigro (Chief Technical Officer) on the 9th of December concerning the Amiga line. Products Phoenix The Commodore Phoenix was a keyboard computer resembling an updated style of the Commodore 64. It was originally designed and manufactured by Cybernet as a space-saving workstation. Commodore 64x The flagship product for Commodore USA, named the Commodore 64x, was contained in a partially redesigned and updated Commodore 64 form factor. The machine looked like the original Commodore 64, except with a slightly updated keyboard and power supply. The base model has an Intel Atom processor and an NVIDIA Ion 2 graphics card. The top version released on August 13, 2011 was called the "C64x Extreme" and featured an Intel Core i7 CPU with 8 GB RAM and 3 TB hard drive using the Intel Sandy Bridge chipset. There was also a barebones version of the C64x shell without a motherboard, power supply, or optical drive or hard drive, that was meant to encourage hobbyist enthusiasts to install their preferred Mini-ATX motherboard. VIC The revamped Vic product line is a group of keyboard computers with original Commodore function keys. The Vic Slim had a keyboard that was the same size as most extended keyboards, but used a relatively slow Intel Atom CPU. The Vic Pro was a keyboard computer that also contained a built-in touchpad, memory card reader, and two fans. Amiga A product line of Amiga branded x86 computers based upon the Intel i7 chipset featuring emulation of the classic Amiga systems built-in. The only available Amiga sold so far is the Amiga Mini which was a barebone computer. The Amiga Mio was offered as a refresh of the Amiga Mini, but was discontinued on November 4, 2013. Commodore OS As of November 11, 2011, Commodore USA has released a beta version of Commodore OS, a Linux Mint-based operating system to be used throughout its product range. It is a media center operating system, bundled with a variety of free open source software. The full version of this beta operating system is available only systems purchased from Commodore USA. It does support emulation of some of the previous Commodore operating systems. History Commodore USA consistently focused on bundling an alternate operating system, preferring Linux. It previously claimed that their machines support every operating system available from Ubuntu specifically, to Windows and even OSx86, but disclaiming that they do not and will not sell Mac OS X. Commodore USA's online store sold Microsoft Windows separately and bundled Linux in their keyboard computers. 
Later, Commodore USA announced that it would officially support, develop, and ship its computers with AROS, but then shifted its focus to redesigning Linux as Amiga Workbench 5 and Amiga Workbench X; it ultimately named the system Commodore OS and dropped all plans to make it resemble an Amiga-like operating system due to additional legal proceedings. Examples of announced products that appear to have been cancelled are Invictus and Amigo. The Commodore USA website was redesigned and an interactive forum was launched at the same time. High-end Amiga-PC designs were posted on the website. The company licensed the Commodore brand from Commodore Licensing, BV on August 25, 2010. It licensed the Amiga brand from Amiga, Inc. shortly afterwards on August 31. Barry Altman, founder of Commodore USA, died on December 8, 2012. Controversy Commodore USA was criticized for altering previously announced plans, threatening legal action against an OS News writer's article, and mistakenly attempting to obtain licensing from a Commodore licensee that was not authorized to sublicense. Commodore USA was alleged to have used various images, artwork, and designs without the permission of the original authors. Apparently they chose to do so in some cases because they could not contact the creators to ask permission. Further controversy surrounding the company's image use policy revolved around alleged photographs of the C64x assembly line in China, revealed to have been old promotional images for a facility in Augsburg owned by Fujitsu. Some of Commodore USA's announced products were later cancelled due to intellectual property disagreements, most notably concerning the rights of licensor Amiga Inc. with regard to the possible use of AROS in future Amiga systems from Commodore USA. Others were simply cancelled as the business plan evolved away from their sector of the market. Reception Lance Ulanoff, writing in PCMag, criticized the new Commodore 64 as a "none-too-cheap imitation of the real thing", faulting it for using modern components. Commodore USA responded to this position by pointing out the high cost of researching and developing original chipsets, and the relative expense and lack of mass-market software support for other CPU ISAs such as the Power ISA or the Motorola 68000 family. Commodore USA attempted to address these concerns by announcing Commodore OS, intended to be released with Commodore USA systems. The company's new Amiga product line was not compatible with original Commodore Amiga systems, including the operating system, AmigaOS, which is developed by a separate company. Commodore USA originally intended to develop an AROS-based operating system to be bundled with its Amiga systems; however, this plan was later publicly discarded by CEO Barry Altman. References External links Company Forum - Defunct Company website - Defunct Personal computers Amiga companies American companies established in 2010 American companies disestablished in 2013
47801906
https://en.wikipedia.org/wiki/Digital%20Impact%20Awards%20Africa
Digital Impact Awards Africa
Digital Impact Awards Africa (DIAA) is a platform that promotes digital inclusion, financial inclusion and cybersecurity under the theme "Maximizing the Digital Dividend". The awards seek to recognize and appreciate organizations that are spearheading the use of digital media in this respect. Digital Impact Awards Africa is organized by HiPipo. Eligibility To be considered, nominees must have contributed substantially to the digital space in Africa. Entries should offer innovative, useful or engaging digital (web, mobile, social media) content, applications, services or utilities, including digital financial services with good cybersecurity practices. Nominees may be companies (corporates/SMEs), nonprofit organizations, digital applications, projects, platforms and promotions. The scope of eligible organisations excludes media houses. The project covers three main domains: digital inclusion, financial inclusion and cybersecurity. The International Telecommunication Union (ITU), the United Nations specialized agency for information and communication technologies (ICTs), listed Digital Impact Awards Africa among the ICT projects and events that celebrated ITU's 150th anniversary. Jury and Research Panel voting Submissions for nomination are evaluated and decided by the Research Panel, whereas final award winners are decided by the Jury Panel and a public vote. The Awards Jury and Research Panel comprise people with extensive knowledge and experience in ICT roles, such as entrepreneurs, innovators, academics, consultants, policy makers and thought leaders. Award winners The winners of the 2nd Digital Impact Awards Africa were: Best Payments/Transfers Service: Payway Best Online/Mobile Banking Service: Standard Chartered Bank (Uganda) Best Mobile Money Service: MTN Mobile Money Best Government Agency on Social Media: National Water and Sewerage Corporation (NWSC) Best Corporate Brand on Social Media: MTN Uganda Most Promising Social Media Presence: Crown Beverages Limited (Pepsi, Mirinda, Mountain Dew) Best E-Commerce (Classifieds / Marketplace): Cheki Best E-Commerce (Store/Service): HelloFood Best E-Service: UMEME Best Mobile App: Kaymu Best Mobile App for Africa: Vodacom My App (South Africa) Best Digital Marketing Campaign: Airtel Trace Music Star Best Digital Customer Service: National Water and Sewerage Corporation (NWSC) COMMENDED New Website: PostBank Uganda Best Corporate Website: Standard Chartered Bank (Uganda) Best Cybersecurity Practice: Standard Chartered Bank (Uganda) Best Digital Inclusion Impact: Airtel Uganda COMMENDED Financial Inclusion Impact: Pride MicroFinance Ltd Best Financial Inclusion Impact: Centenary Bank Uganda Digital Brand of the Year MTN Uganda Award Winners 2016 Africa Medal of Honor Africa's Financial Inclusion Medal of Honor: Mr. Michael Joseph, Dr. Nick Hughes, Ms. Susie Lonie Africa's Digital Inclusion Medal of Honor: Mr. 
Ren Zhengfei AWARD of Excellence Digital Brand of the year – MTN Best Digital Bank for Africa - Standard Chartered Bank Best Mobile Financial Service for Africa - Safaricom/Vodafone M-Pesa Best Mobile Financial Service Platform for Africa - Mahindra Comviva Mobiquity Best Digital Remittances Service for Africa – Money Gram Best Digital Payments Service For Africa – Payway Best Cybersecurity Practice by Bank in Africa - First National Bank Best Smartphone Brand for Africa – Huawei Best Mobile App Innovation for Africa - My Vodacom App Best E-Commerce Service for Africa – Jumia Best Connectivity Initiative for Africa - Google Project Link (Wi-Fi with Roke Telecom) Best Pay TV Service for Africa – DSTV Best Digital Marketing Campaign for Africa - Coke Studio Africa COMMENDATIONS Transformative Mobile Banking Platform for Africa - RedCloud One Platform Interoperable Digital Payment Enablement for Africa – InterSwitch Award Winners 2016 Uganda AWARD of Excellence Best Brand on Social Media - Centenary Bank Best Corporate Website - Stanbic Bank – Uganda Best Cybersecurity Practice by Corporate - Standard Chartered Bank Best Digital Customer Service – NWSC Best E-Service - Umeme (E-Pay Yaka and Mobile App) Best Financial Inclusion Initiative - Airtel Weza Best Government Agency on Social Media – KCCA Best Mobile App - MyMTN App Best Mobile Banking - Centenary Bank Best Mobile Financial Service - MTN Mobile Money Best Online Banking - Standard Chartered Bank Most Promising Social Media Embrace - Stanbic Bank Certificate of Commendation; Use of Digital for Travel - Modern Coast Express Use of Digital for Engineering & Manufacturing - Movit Products Limited Use of Digital for Foods & Beverage - Hariss International (Riham, Rockboom) Use of Digital for Energy - Total Uganda Limited Use of Digital for Agriculture & Agro Processing - Jesa Farm Dairy Limited Use of Digital for Insurance – UAP Use of Digital for Healthcare - Vine Pharmaceuticals Ltd Use of Digital for Real Estate & Construction - National Housing and Construction Company Use of Digital for Retail & Distribution - Footsteps Furniture Use of Digital for Hospitality - Kampala Serena Hotel Use of Digital for Technology - Yo Dime Use of Digital for Education – UTAMU Use of Digital for Tourism - Great Lakes Safari Limited Use of Digital for Finance and Credit - Mercantile Credit Bank Cybersecurity Practice by Government Institution – Bank of Uganda Championing Digital in Creative Arts Industry – Mr. Moses Ssali See also Africa Digital Awards References African music awards
252507
https://en.wikipedia.org/wiki/American%20frontier
American frontier
The American frontier, also known as the Old West or the Wild West, includes the geography, history, folklore, and culture in the forward wave of American expansion in mainland North America that began with European colonial settlements in the early 17th century and ended with the admission of the last few western territories as states in 1912 (except Alaska, which was not admitted into the Union until 1959). This era of massive migration and settlement was particularly encouraged by President Thomas Jefferson following the Louisiana Purchase, giving rise to the expansionist attitude known as "Manifest Destiny" and the historians' "Frontier Thesis". The legends, historical events and folklore of the American frontier have embedded themselves into United States culture so much so that the Old West, and the Western genre of media specifically, has become one of the defining periods of American national identity. The archetypical Old West period is generally accepted by historians to have occurred between the end of the American Civil War in 1865 until the closing of the Frontier by the Census Bureau in 1890. By 1890, settlement in the American West had reached sufficient population density that the frontier line had disappeared; in 1890 the Census Bureau released a bulletin declaring the closing of the frontier, stating: "Up to and including 1880 the country had a frontier of settlement, but at present the unsettled area has been so broken into by isolated bodies of settlement that there can hardly be said to be a frontier line. In the discussion of its extent, its westward movement, etc., it can not, therefore, any longer have a place in the census reports." A frontier is a zone of contact at the edge of a line of settlement. Leading theorist Frederick Jackson Turner went deeper, arguing that the frontier was the scene of a defining process of American civilization: "The frontier," he asserted, "promoted the formation of a composite nationality for the American people." He theorized it was a process of development: "This perennial rebirth, this fluidity of American life, this expansion westward...furnish[es] the forces dominating American character." Turner's ideas since 1893 have inspired generations of historians (and critics) to explore multiple individual American frontiers, but the popular folk frontier concentrates on the conquest and settlement of Native American lands west of the Mississippi River, in what is now the Midwest, Texas, the Great Plains, the Rocky Mountains, the Southwest, and the West Coast. Enormous popular attention was focused on the Western United States (especially the Southwest) in the second half of the 19th century and the early 20th century, from the 1850s to the 1910s. Such media typically exaggerated the romance, anarchy, and chaotic violence of the period for greater dramatic effect. This inspired the Western genre of film, along with television shows, novels, comic books, video games, children's toys and costumes. As defined by Hine and Faragher, "frontier history tells the story of the creation and defense of communities, the use of the land, the development of markets, and the formation of states." They explain, "It is a tale of conquest, but also one of survival, persistence, and the merging of peoples and cultures that gave birth and continuing life to America." 
Turner himself repeatedly emphasized how the availability of free land to start new farms attracted pioneering Americans: "The existence of an area of free land, its continuous recession, and the advance of American settlement westward, explain American development." Through treaties with foreign nations and native tribes, political compromise, military conquest, the establishment of law and order, the building of farms, ranches, and towns, the marking of trails and digging of mines, and the pulling in of great migrations of foreigners, the United States expanded from coast to coast, fulfilling the ideology of Manifest destiny. In his "Frontier Thesis" (1893), Turner theorized that the frontier was a process that transformed Europeans into a new people, the Americans, whose values focused on equality, democracy, and optimism, as well as individualism, self-reliance, and even violence. As the American frontier passed into history, the myths of the West in fiction and film took a firm hold in the imaginations of Americans and foreigners alike. In David Murdoch's view, America is exceptional in choosing its iconic self-image: "No other nation has taken a time and place from its past and produced a construct of the imagination equal to America's creation of the West." Terms West and frontier The frontier is the margin of undeveloped territory that would comprise the United States beyond the established frontier line. The U.S. Census Bureau designated frontier territory as generally unoccupied land with a population density of fewer than 2 people per square mile (0.77 people per square kilometer). The frontier line was the outer boundary of European-American settlement into this land. Beginning with the first permanent European settlements on the East Coast, it has moved steadily westward from the 1600s to the 1900s (decades) with occasional movements north into Maine and Vermont, south into Florida, and east from California into Nevada. Pockets of settlements would also appear far past the established frontier line, particularly on the West Coast and the deep interior with settlements such as Los Angeles and Salt Lake City respectively. The "West" was the recently settled area near that boundary. Thus, parts of the Midwest and American South, though no longer considered "western", have a frontier heritage along with the modern western states. Richard W. Slatta, in his view of the frontier, writes that "historians sometimes define the American West as lands west of the 98th meridian or 98° west longitude," and that other definitions of the region "include all lands west of the Mississippi or Missouri rivers." Maps of United States territories Key: History Colonial frontier In the colonial era, before 1776, the west was of high priority for settlers and politicians. The American frontier began when Jamestown, Virginia, was settled by the English in 1607. In the earliest days of European settlement on the Atlantic coast, until about 1680, the frontier was essentially any part of the interior of the continent beyond the fringe of existing settlements along the Atlantic coast. English, French, Spanish and Dutch patterns of expansion and settlement were quite different. Only a few thousand French migrated to Canada; these habitants settled in villages along the St. Lawrence River, building communities that remained stable for long stretches. Although French fur traders ranged widely through the Great Lakes and midwest region they seldom settled down. 
French settlement was limited to a few very small villages such as Kaskaskia, Illinois as well as a larger settlement around New Orleans. In what is now New York state the Dutch set up fur trading posts in the Hudson River valley, followed by large grants of land to rich landowning patroons who brought in tenant farmers who created compact, permanent villages. They created a dense rural settlement in upstate New York, but they did not push westward. Areas in the north that were in the frontier stage by 1700 generally had poor transportation facilities, so the opportunity for commercial agriculture was low. These areas remained primarily in subsistence agriculture, and as a result, by the 1760s these societies were highly egalitarian, as explained by historian Jackson Turner Main: In the South, frontier areas that lacked transportation, such as the Appalachian Mountains region, remained based on subsistence farming and resembled the egalitarianism of their northern counterparts, although they had a larger upper-class of slaveowners. North Carolina was representative. However, frontier areas of 1700 that had good river connections were increasingly transformed into plantation agriculture. Rich men came in, bought up the good land, and worked it with slaves. The area was no longer "frontier". It had a stratified society comprising a powerful upper-class white landowning gentry, a small middle-class, a fairly large group of landless or tenant white farmers, and a growing slave population at the bottom of the social pyramid. Unlike the North, where small towns and even cities were common, the South was overwhelmingly rural. From British peasants to American farmers The seaboard colonial settlements gave priority to land ownership for individual farmers, and as the population grew they pushed westward for fresh farmland. Unlike Britain, where a small number of landlords owned most of the land, ownership in America was cheap, easy and widespread. Land ownership brought a degree of independence as well as a vote for local and provincial offices. The typical New England settlements were quite compact and small, under a square mile. Conflict with the Native Americans arose out of political issues, namely who would rule. Early frontier areas east of the Appalachian Mountains included the Connecticut River valley, and northern New England (which was a move to the north, not the west). Wars with French and with Natives Most of the frontiers experienced numerous conflicts. The French and Indian War broke out between Britain and France, with the French making up for their small colonial population base by enlisting Indian war parties as allies. The series of large wars spilling over from European wars ended in a complete victory for the British in the worldwide Seven Years' War. In the peace treaty of 1763, France ceded practically everything, as the lands west of the Mississippi River, in addition to Florida and New Orleans, went to Spain. Otherwise, lands east of the Mississippi River and what is now Canada went to Britain. Steady migration to frontier lands Regardless of wars Americans were moving across the Appalachians into western Pennsylvania, what is now West Virginia, and areas of the Ohio Country, Kentucky, and Tennessee. In the southern settlements via the Cumberland Gap, their most famous leader was Daniel Boone. 
Young George Washington promoted settlements in West Virginia on lands awarded to him and his soldiers by the Royal government in payment for their wartime service in Virginia's militia. Settlements west of the Appalachian Mountains were curtailed briefly by the Royal Proclamation of 1763, forbidding settlement in this area. Treaty of Fort Stanwix (1768) re-opened most of the western lands for frontiersmen to settle. New nation The nation was at peace after 1783. The states gave Congress control of the western lands and an effective system for population expansion was developed. The Northwest Ordinance of 1787 abolished slavery in the area north of the Ohio River and promised statehood when a territory reached a threshold population, as Ohio did in 1803. The first major movement west of the Appalachian mountains originated in Pennsylvania, Virginia, and North Carolina as soon as the Revolutionary War ended in 1781. Pioneers housed themselves in a rough lean-to or at most a one-room log cabin. The main food supply at first came from hunting deer, turkeys, and other abundant game. In a few years, the pioneer added hogs, sheep, and cattle, and perhaps acquired a horse. Homespun clothing replaced the animal skins. The more restless pioneers grew dissatisfied with over civilized life and uprooted themselves again to move 50 or a hundred miles (80 or 160 km) further west. Land policy The land policy of the new nation was conservative, paying special attention to the needs of the settled East. The goals sought by both parties in the 1790–1820 era were to grow the economy, avoid draining away the skilled workers needed in the East, distribute the land wisely, sell it at prices that were reasonable to settlers yet high enough to pay off the national debt, clear legal titles, and create a diversified Western economy that would be closely interconnected with the settled areas with minimal risk of a breakaway movement. By the 1830s, however, the West was filling up with squatters who had no legal deed, although they may have paid money to previous settlers. The Jacksonian Democrats favored the squatters by promising rapid access to cheap land. By contrast, Henry Clay was alarmed at the "lawless rabble" heading West who were undermining the utopian concept of a law-abiding, stable middle-class republican community. Rich southerners, meanwhile, looked for opportunities to buy high-quality land to set up slave plantations. The Free Soil movement of the 1840s called for low-cost land for free white farmers, a position enacted into law by the new Republican Party in 1862, offering free 160 acres (65 ha) homesteads to all adults, male and female, black and white, native-born or immigrant. After winning the Revolutionary War (1783), American settlers in large numbers poured into the west. In 1788, American pioneers to the Northwest Territory established Marietta, Ohio, as the first permanent American settlement in the Northwest Territory. In 1775, Daniel Boone blazed a trail for the Transylvania Company from Virginia through the Cumberland Gap into central Kentucky. It was later lengthened to reach the Falls of the Ohio at Louisville. The Wilderness Road was steep and rough, and it could only be traversed on foot or horseback, but it was the best route for thousands of settlers moving into Kentucky. In some areas they had to face Indian attacks. In 1784 alone, Indians killed over 100 travelers on the Wilderness Road. Kentucky at this time had been depopulated—it was "empty of Indian villages." 
However, raiding parties sometimes came through. One of those intercepted was Abraham Lincoln's grandfather, who was scalped in 1784 near Louisville.
Acquisition of indigenous lands
The War of 1812 marked the final confrontation involving major British and Indian forces fighting to stop American expansion. The British war goal included the creation of an Indian barrier state under British auspices in the Midwest that would halt American expansion westward. American frontier militiamen under General Andrew Jackson defeated the Creeks and opened the Southwest, while militia under Governor William Henry Harrison defeated the Indian-British alliance at the Battle of the Thames in Canada in 1813. The death in battle of the Indian leader Tecumseh dissolved the coalition of hostile Indian tribes. Meanwhile, Jackson ended the Indian military threat in the Southeast at the Battle of Horseshoe Bend in Alabama in 1814. In general, the frontiersmen battled the Indians with little help from the U.S. Army or the federal government. To end the war, American diplomats negotiated the Treaty of Ghent with Britain, signed towards the end of 1814. They rejected the British plan to set up an Indian state in U.S. territory south of the Great Lakes and explained the American policy toward the acquisition of Indian lands.
New territories and states
As settlers poured in, the frontier districts first became territories, with an elected legislature and a governor appointed by the president. Then, when the population reached 100,000, the territory applied for statehood. Frontiersmen typically dropped the legalistic formalities and restrictive franchise favored by eastern upper classes and adopted more democracy and more egalitarianism. In 1810 the western frontier had reached the Mississippi River. St. Louis, Missouri, was the largest town on the frontier, the gateway for travel westward, and a principal trading center for Mississippi River traffic and inland commerce, but it had remained under Spanish control until 1803.
The Louisiana Purchase of 1803
Thomas Jefferson thought of himself as a man of the frontier and was keenly interested in expanding and exploring the West. Jefferson's Louisiana Purchase of 1803 doubled the size of the nation at the cost of $15 million, or about four cents per acre. Federalists opposed the expansion, but Jeffersonians hailed the opportunity to create millions of new farms to expand the domain of land-owning yeomen; the ownership would strengthen the ideal republican society, based on agriculture (not commerce), governed lightly, and promoting self-reliance and virtue, as well as forming the political base for Jeffersonian democracy. France was paid for its sovereignty over the territory in terms of international law. Between 1803 and the 1870s, the federal government purchased the land from the Indian tribes then in possession of it. Twentieth-century accountants and courts have calculated the value of the payments made to the Indians, which included future payments of cash, food, horses, cattle, supplies, buildings, schooling, and medical care. In cash terms, the total paid to the tribes in the area of the Louisiana Purchase amounted to about $2.6 billion, or nearly $9 billion in 2016 dollars. Additional sums were paid to the Indians living east of the Mississippi for their lands, as well as payments to Indians living in parts of the west outside the Louisiana Purchase.
Even before the purchase, Jefferson was planning expeditions to explore and map the lands. He charged Lewis and Clark to "explore the Missouri River, and such principal stream of it, as, by its course and communication with the waters of the Pacific Ocean, whether the Columbia, Oregon, Colorado, or any other river may offer the most direct and practicable communication across the continent for commerce". Jefferson also instructed the expedition to study the region's native tribes (including their morals, language, and culture), weather, soil, rivers, commercial trading, and animal and plant life. Entrepreneurs, most notably John Jacob Astor, quickly seized the opportunity and expanded fur trading operations into the Pacific Northwest. Astor's "Fort Astoria" (later Fort George), at the mouth of the Columbia River, became the first permanent white settlement in that area, although it was not profitable for Astor. He set up the American Fur Company in an attempt to break the hold that the Hudson's Bay Company monopoly had over the region. By 1820, Astor had taken over independent traders to create a profitable monopoly; he left the business as a multi-millionaire in 1834.
The fur trade
As the frontier moved west, trappers and hunters moved ahead of settlers, searching out new supplies of beaver and other skins for shipment to Europe. The hunters were the first Europeans in much of the Old West, and they formed the first working relationships with the Native Americans in the West. They added extensive knowledge of the Northwest terrain, including the important South Pass through the central Rocky Mountains. Discovered about 1812, it later became a major route for settlers to Oregon and Washington. By 1820, however, a new "brigade-rendezvous" system sent company men in "brigades" cross-country on long expeditions, bypassing many tribes. It also encouraged "free trappers" to explore new regions on their own. At the end of the gathering season, the trappers would "rendezvous" and turn in their goods for pay at river ports along the Green River, the Upper Missouri, and the Upper Mississippi. St. Louis was the largest of the rendezvous towns. By 1830, however, fashions changed and beaver hats were replaced by silk hats, ending the demand for expensive American furs. Thus ended the era of the mountain men, trappers, and scouts such as Jedediah Smith, Hugh Glass, Davy Crockett, Jack Omohundro, and others. The trade in beaver fur virtually ceased by 1845.
The federal government and westward expansion
There was wide agreement on the need to settle the new territories quickly, but the debate polarized over the price the government should charge. The conservatives and Whigs, typified by President John Quincy Adams, wanted a moderated pace that charged the newcomers enough to pay the costs of the federal government. The Democrats, however, tolerated a wild scramble for land at very low prices. The final resolution came in the Homestead Law of 1862, with a moderated pace that gave settlers 160 acres free after they had worked the land for five years. The private profit motive dominated the movement westward, but the federal government played a supporting role in securing the land through treaties and setting up territorial governments, with governors appointed by the President. The federal government first acquired western territory through treaties with other nations or native tribes. Then it sent surveyors to map and document the land.
By the 20th century, Washington bureaucracies managed the federal lands: the General Land Office in the Interior Department and, after 1891, the Forest Service in the Department of Agriculture. After 1900, dam building and flood control became major concerns. Transportation was a key issue, and the Army (especially the Army Corps of Engineers) was given full responsibility for facilitating navigation on the rivers. The steamboat, first used on the Ohio River in 1811, made possible inexpensive travel using the river systems, especially the Mississippi and Missouri rivers and their tributaries. Army expeditions up the Missouri River in 1818–25 allowed engineers to improve the technology. For example, the Army's steamboat "Western Engineer" of 1819 combined a very shallow draft with one of the earliest stern wheels. In 1819–25, Colonel Henry Atkinson developed keelboats with hand-powered paddle wheels.
The federal postal system played a crucial role in national expansion. It facilitated expansion into the West by creating an inexpensive, fast, convenient communication system. Letters from early settlers provided information and boosterism to encourage increased migration to the West, helped scattered families stay in touch and provide mutual help, assisted entrepreneurs in finding business opportunities, and made possible regular commercial relationships between merchants in the West and wholesalers and factories back east. The postal service likewise assisted the Army in expanding control over the vast western territories. The widespread circulation of important newspapers by mail, such as the New York Weekly Tribune, facilitated coordination among politicians in different states. The postal service helped to integrate already established areas with the frontier, creating a spirit of nationalism and providing a necessary infrastructure.
The Army early on assumed the mission of protecting settlers along the westward expansion trails, a policy described by Secretary of War John B. Floyd in 1857: "A line of posts running parallel with our frontier, but near to the Indians' usual habitations, placed at convenient distances and suitable positions, and occupied by infantry, would exercise a salutary restraint upon the tribes, who would feel that any foray by their warriors upon the white settlements would meet with prompt retaliation upon their own homes." There was a debate at the time about the best size for the forts, with Jefferson Davis, Winfield Scott, and Thomas Jesup supporting forts that were larger but fewer in number than Floyd favored. Floyd's plan was more expensive but had the support of settlers and the general public, who preferred that the military remain as close as possible. The frontier area was vast, and even Davis conceded that "concentration would have exposed portions of the frontier to Indian hostilities without any protection."
Scientists, artists, and explorers
Government and private enterprise sent many explorers to the West. In 1805–1806, Army lieutenant Zebulon Pike (1779–1813) led a party of 20 soldiers to find the headwaters of the Mississippi. He later explored the Red and Arkansas Rivers in Spanish territory, eventually reaching the Rio Grande. On his return, Pike sighted the peak in Colorado named after him.
Major Stephen Harriman Long (1784–1864) led the Yellowstone and Missouri expeditions of 1819–1820, but his categorization in 1823 of the Great Plains as arid and useless gave the region a lasting reputation as the "Great American Desert", which discouraged settlement there for several decades. In 1811, naturalists Thomas Nuttall (1786–1859) and John Bradbury (1768–1823) traveled up the Missouri River documenting and drawing plant and animal life. Artist George Catlin (1796–1872) produced accurate paintings of Native American culture. Swiss artist Karl Bodmer made compelling landscapes and portraits. John James Audubon (1785–1851) is famous for classifying and painting in minute detail 500 species of birds, published in Birds of America.
The most famous of the explorers was John Charles Frémont (1813–1890), an Army officer in the Corps of Topographical Engineers. He displayed a talent for exploration and a genius at self-promotion that gave him the sobriquet of "Pathmarker of the West" and led him to the presidential nomination of the new Republican Party in 1856. He led a series of expeditions in the 1840s which answered many of the outstanding geographic questions about the little-known region. He crossed the Rocky Mountains by five different routes and mapped parts of Oregon and California. In 1846–1847, he played a role in conquering California. In 1848–1849, Frémont was assigned to locate a central route through the mountains for the proposed transcontinental railroad, but his expedition ended in near-disaster when it became lost and was trapped by heavy snow. His reports mixed narratives of exciting adventure with scientific data and detailed practical information for travelers; they caught the public imagination and inspired many to head west. Goetzmann calls the work "monumental in its breadth, a classic of exploring literature".
While colleges were springing up across the Northeast, there was little competition on the western frontier for Transylvania University, founded in Lexington, Kentucky, in 1780. It boasted a law school in addition to its undergraduate and medical programs. Transylvania attracted politically ambitious young men from across the Southwest, including 50 who became United States senators, 101 representatives, 36 governors, and 34 ambassadors, as well as Jefferson Davis, the president of the Confederacy.
Antebellum West
Religion
The established Eastern churches were slow to meet the needs of the frontier. The Presbyterians and Congregationalists, since they depended on well-educated ministers, were shorthanded in evangelizing the frontier. They set up the Plan of Union of 1801 to combine resources on the frontier. Most frontiersmen showed little commitment to religion until traveling evangelists began to appear and to produce "revivals". The local pioneers responded enthusiastically to these events and, in effect, evolved their own populist religions, especially during the Second Great Awakening (1790–1840), which featured outdoor camp meetings lasting a week or more and which introduced many people to organized religion for the first time. One of the largest and most famous camp meetings took place at Cane Ridge, Kentucky, in 1801. The local Baptists set up small independent churches; Baptists abjured centralized authority, and each local church was founded on the principle of the independence of the local congregation.
On the other hand, bishops of the well-organized, centralized Methodists assigned circuit riders to specific areas for several years at a time, then moved them to fresh territory. Several new denominations were formed, of which the largest was the Disciples of Christ.
Democracy in the Midwest
Historian Mark Wyman calls Wisconsin a "palimpsest" of layer upon layer of peoples and forces, each imprinting permanent influences. He identified these layers as multiple "frontiers" over three centuries: the Native American frontier, the French frontier, the English frontier, the fur-trade frontier, the mining frontier, and the logging frontier. Finally, the coming of the railroad brought the end of the frontier. Frederick Jackson Turner grew up in Wisconsin during its last frontier stage, and in his travels around the state he could see the layers of social and political development. One of Turner's last students, Merle Curti, used an in-depth analysis of local Wisconsin history to test Turner's thesis about democracy. Turner's view was that American democracy "involved widespread participation in the making of decisions affecting the common life, the development of initiative and self-reliance, and equality of economic and cultural opportunity. It thus also involved Americanization of the immigrant." Curti found that from 1840 to 1860 in Wisconsin the poorest groups gained rapidly in land ownership and often rose to political leadership at the local level. He found that even landless young farmworkers were soon able to obtain their own farms. Free land on the frontier, therefore, created opportunity and democracy for European immigrants as well as old-stock Yankees.
Southwest
From the 1770s to the 1830s, pioneers moved into the new lands that stretched from Kentucky to Alabama to Texas. Most were farmers who moved in family groups. Historian Louis Hacker shows how wasteful the first generation of pioneers was: they were too ignorant to cultivate the land properly, and when the natural fertility of virgin land was used up, they sold out and moved west to try again. Hacker describes the pattern in Kentucky about 1812, and adds that the second wave of settlers reclaimed the land, repaired the damage, and practiced more sustainable agriculture. Historian Frederick Jackson Turner explored the individualistic worldview and values of the first generation.
Manifest Destiny
Manifest Destiny was the belief that the United States was preordained to expand from the Atlantic coast to the Pacific coast. The concept was expressed during Colonial times, but the term was coined in the 1840s by a popular magazine which editorialized, "the fulfillment of our manifest destiny ... to overspread the continent allotted by Providence for the free development of our yearly multiplying millions." As the nation grew, "Manifest Destiny" became a rallying cry for expansionists in the Democratic Party. In the 1840s the Tyler and Polk administrations (1841–49) successfully promoted this nationalistic doctrine. However, the Whig Party, which represented business and financial interests, stood opposed to Manifest Destiny. Whig leaders such as Henry Clay and Abraham Lincoln called for deepening the society through modernization and urbanization instead of simple horizontal expansion. Starting with the annexation of Texas, the expansionists got the upper hand. John Quincy Adams, an anti-slavery Whig, felt the Texas annexation in 1845 to be "the heaviest calamity that ever befell myself and my country".
Helping settlers move westward were the emigrant "guide books" of the 1840s, featuring route information supplied by the fur traders and the Frémont expeditions and promising fertile farmland beyond the Rockies.
Mexico and Texas
Mexico became independent of Spain in 1821 and took over Spain's northern possessions stretching from Texas to California. Caravans began delivering goods to Mexico's Santa Fe along the Santa Fe Trail, a journey that took 48 days from Kansas City, Missouri (then known as Westport). Santa Fe was also the trailhead for El Camino Real (the King's Highway), a trade route that carried American manufactured goods southward deep into Mexico and returned silver, furs, and mules northward (not to be confused with another Camino Real, which connected the missions in California). A branch also ran eastward near the Gulf (also called the Old San Antonio Road). Santa Fe connected to California via the Old Spanish Trail. The Spanish and Mexican governments attracted American settlers to Texas with generous terms. Stephen F. Austin became an "empresario", receiving contracts from Mexican officials to bring in immigrants. In doing so, he also became the de facto political and military commander of the area. Tensions rose, however, after an abortive attempt to establish the independent nation of Fredonia in 1826. William Travis, leading the "war party", advocated independence from Mexico, while the "peace party" led by Austin attempted to get more autonomy within the current relationship. When Mexican president Santa Anna shifted alliances and joined the conservative Centralist party, he declared himself dictator and ordered soldiers into Texas to curtail new immigration and unrest. However, immigration continued, and 30,000 Anglos with 3,000 slaves had settled in Texas by 1835. In 1836, the Texas Revolution erupted. Following losses at the Alamo and Goliad, the Texians won the decisive Battle of San Jacinto to secure independence. At San Jacinto, Sam Houston, commander-in-chief of the Texian Army and future president of the Republic of Texas, famously shouted "Remember the Alamo! Remember Goliad!" The U.S. Congress declined to annex Texas, stalemated by contentious arguments over slavery and regional power. Thus, the Republic of Texas remained an independent power for nearly a decade before it was annexed as the 28th state in 1845. The government of Mexico, however, viewed Texas as a runaway province and asserted its ownership.
The Mexican–American War
Mexico refused to recognize the independence of Texas in 1836, but the U.S. and European powers did so. Mexico threatened war if Texas joined the U.S., which it did in 1845. American negotiators were turned away by a Mexican government in turmoil. When the Mexican army killed 16 American soldiers in disputed territory, war was at hand. Whigs such as Congressman Abraham Lincoln denounced the war, but it was quite popular outside New England. The Mexican strategy was defensive; the American strategy was a three-pronged offensive, using large numbers of volunteer soldiers. Overland forces seized New Mexico with little resistance and headed to California, which quickly fell to American land and naval forces. From the main American base at New Orleans, General Zachary Taylor led forces into northern Mexico, winning the series of battles that ensued. The U.S. Navy transported General Winfield Scott to Veracruz; he then marched his 12,000-man force west to Mexico City, winning the final battle at Chapultepec.
Talk of acquiring all of Mexico fell away when the army discovered how alien Mexican political and cultural values were to America's. As the Cincinnati Herald asked, what would the U.S. do with eight million Mexicans "with their idol worship, heathen superstition, and degraded mongrel races?" The Treaty of Guadalupe Hidalgo of 1848 ceded the territories of California and New Mexico to the United States for $18.5 million (which included the assumption of settlers' claims against Mexico). The Gadsden Purchase in 1853 added southern Arizona, which was needed for a railroad route to California. In all, Mexico ceded half a million square miles (1.3 million km2); the cession included the states-to-be of California, Utah, Arizona, Nevada, and New Mexico, and parts of Colorado and Wyoming, in addition to Texas. Managing the new territories and dealing with the slavery issue caused intense controversy, particularly over the Wilmot Proviso, which would have outlawed slavery in the new territories. Congress never passed it, but rather temporarily resolved the issue of slavery in the West with the Compromise of 1850. California entered the Union in 1850 as a free state; the other areas remained territories for many years.
Growth of Texas
The new state grew rapidly as migrants poured into the fertile cotton lands of east Texas. German immigrants started to arrive in the early 1840s because of economic, social, and political pressures in Germany. With their investments in cotton lands and slaves, planters established cotton plantations in the eastern districts. The central area of the state was developed more by subsistence farmers who seldom owned slaves. Texas in its Wild West days attracted men who could shoot straight and possessed the zest for adventure, "for masculine renown, patriotic service, martial glory, and meaningful deaths".
The California Gold Rush
In 1846 about 10,000 Californios (Hispanics) lived in California, primarily on cattle ranches in what is now the Los Angeles area. A few hundred foreigners were scattered in the northern districts, including some Americans. With the outbreak of war with Mexico in 1846, the U.S. sent in Frémont and a U.S. Army unit, as well as naval forces, and quickly took control. As the war was ending, gold was discovered in the north, and the word soon spread worldwide. Thousands of "Forty-Niners" reached California by sailing around South America (or taking a shortcut through disease-ridden Panama) or by walking the California Trail. The population soared to over 200,000 in 1852, mostly in the gold districts that stretched into the mountains east of San Francisco. Housing in San Francisco was at a premium, and abandoned ships whose crews had headed for the mines were often converted to temporary lodging. In the goldfields themselves, living conditions were primitive, though the mild climate proved attractive. Supplies were expensive and food was poor, with typical diets consisting mostly of pork, beans, and whiskey. These heavily male, transient communities with no established institutions were prone to high levels of violence, drunkenness, profanity, and greed-driven behavior. Without courts or law officers in the mining communities to enforce claims and administer justice, miners developed their own ad hoc legal system, based on the "mining codes" used in other mining communities abroad.
Each camp had its own rules and often handed out justice by popular vote, sometimes acting fairly and at other times acting as vigilantes, with Indians (Native Americans), Mexicans, and Chinese generally receiving the harshest sentences. The gold rush radically changed the California economy and brought in an array of professionals, including precious metal specialists, merchants, doctors, and attorneys, who added to the population of miners, saloon keepers, gamblers, and prostitutes. A San Francisco newspaper stated, "The whole country ... resounds to the sordid cry of gold! Gold! Gold! while the field is left half planted, the house half-built, and everything neglected but the manufacture of shovels and pickaxes." Over 250,000 miners found a total of more than $200 million in gold in the five years of the California Gold Rush. As thousands arrived, however, fewer and fewer miners struck their fortune, and most ended up exhausted and broke. Violent bandits often preyed upon the miners; in one case, Jonathan R. Davis single-handedly killed eleven bandits. Camps spread out north and south of the American River and eastward into the Sierras. In a few years, nearly all of the independent miners were displaced as mines were purchased and run by mining companies, which then hired low-paid salaried miners. As gold became harder to find and more difficult to extract, individual prospectors gave way to paid work gangs, specialized skills, and mining machinery. Bigger mines, however, caused greater environmental damage. In the mountains, shaft mining predominated, producing large amounts of waste. From 1852, at the end of the '49 gold rush, until 1883, hydraulic mining was used. Although it produced huge profits, the industry fell into the hands of a few capitalists, displaced numerous miners, sent vast amounts of waste into river systems, and did heavy ecological damage. Hydraulic mining ended when public outcry over the destruction of farmland led to the outlawing of the practice.
The mountainous areas of the triangle from New Mexico to California to South Dakota contained hundreds of hard-rock mining sites, where prospectors discovered gold, silver, copper, and other minerals (as well as some soft-rock coal). Temporary mining camps sprang up overnight; most became ghost towns when the ores were depleted. Prospectors spread out and hunted for gold and silver along the Rockies and in the Southwest. Soon gold was discovered in Colorado, Utah, Arizona, New Mexico, Idaho, Montana, and South Dakota (by 1864). The discovery of the Comstock Lode, containing vast amounts of silver, resulted in the Nevada boomtowns of Virginia City, Carson City, and Silver City. The wealth from silver, more than from gold, fueled the maturation of San Francisco in the 1860s and helped the rise of some of its wealthiest families, such as that of George Hearst.
The Oregon Trail
To get to the rich new lands of the West Coast, there were two options: some sailed around the southern tip of South America during a six-month voyage, but 400,000 others walked there on an overland route of more than 2,000 miles (3,200 km); their wagon trains usually left from Missouri. They moved in large groups under an experienced wagonmaster, bringing their clothing, farm supplies, weapons, and animals. These wagon trains followed major rivers, crossed prairies and mountains, and typically ended in Oregon and California. Pioneers generally attempted to complete the journey during a single warm season, usually over about six months.
By 1836, when the first migrant wagon train was organized in Independence, Missouri, a wagon trail had been cleared to Fort Hall, Idaho. Trails were cleared farther and farther west, eventually reaching the Willamette Valley in Oregon. This network of wagon trails leading to the Pacific Northwest was later called the Oregon Trail. The eastern half of the route was also used by travelers on the California Trail (from 1843), the Mormon Trail (from 1847), and the Bozeman Trail (from 1863) before they turned off to their separate destinations. In the "Wagon Train of 1843", some 700 to 1,000 emigrants headed for Oregon; missionary Marcus Whitman led the wagons on the last leg. In 1846, the Barlow Road was completed around Mount Hood, providing a rough but passable wagon trail from the Missouri River to the Willamette Valley: about 2,000 miles (3,200 km).
Though the main direction of travel on the early wagon trails was westward, people also used the Oregon Trail to travel eastward. Some did so because they were discouraged and defeated. Some returned with bags of gold and silver. Most were returning to pick up their families and move them all back west. These "gobacks" were a major source of information and excitement about the wonders and promises, and the dangers and disappointments, of the far West. Not all emigrants made it to their destination. The dangers of the overland route were numerous: snakebites, wagon accidents, violence from other travelers, suicide, malnutrition, stampedes, Indian attacks, a variety of diseases (dysentery, typhoid, and cholera were among the most common), exposure, avalanches, and more. One particularly well-known example of the treacherous nature of the journey is the story of the ill-fated Donner Party, which became trapped in the Sierra Nevada mountains during the winter of 1846–1847; nearly half of the 90 people traveling with the group died from starvation and exposure, and some resorted to cannibalism to survive. Another story of cannibalism featured Alfred Packer and his trek to Colorado in 1874. There were also frequent attacks from bandits and highwaymen, such as the infamous Harpe brothers, who prowled the frontier routes and targeted migrant groups.
Mormons and Utah
In Missouri and Illinois, animosity between the Mormon settlers and locals grew, foreshadowing similar conflicts in Utah years later. Violence finally erupted on October 24, 1838, when militias from both sides clashed; a mass killing of Mormons in Livingston County occurred six days later. A Mormon extermination order was issued during these conflicts, and the Mormons were forced to scatter. Brigham Young, seeking to leave American jurisdiction to escape religious persecution in Illinois and Missouri, led the Mormons to the valley of the Great Salt Lake, owned at the time by Mexico but effectively outside its control. A hundred rural Mormon settlements sprang up in what Young called "Deseret", which he ruled as a theocracy. It later became Utah Territory. Young's Salt Lake City settlement served as the hub of their network, which reached into neighboring territories as well. The communalism and advanced farming practices of the Mormons enabled them to succeed. The Mormons often sold goods to wagon trains passing through and came to terms with local Indian tribes because Young decided it was cheaper to feed the Indians than to fight them. Education became a high priority to protect the beleaguered group, reduce heresy, and maintain group solidarity.
Following the end of the Mexican–American War in 1848, Utah was ceded to the United States by Mexico. Though the Mormons in Utah had supported U.S. efforts during the war, the federal government, pushed by the Protestant churches, rejected theocracy and polygamy. Founded in 1854, the Republican Party was openly hostile toward the Church of Jesus Christ of Latter-day Saints (LDS Church) in Utah over the practice of polygamy, which most of the American public viewed as an affront to the religious, cultural, and moral values of modern civilization. Confrontations verged on open warfare in the late 1850s as President Buchanan sent in troops. Although no pitched battles were fought and negotiations led to a stand-down, violence still escalated and there were several casualties. After the Civil War, the federal government systematically took control of Utah; the LDS Church was legally disincorporated in the territory, and members of the church's hierarchy, including Young, were summarily removed and barred from virtually every public office. Meanwhile, successful missionary work in the U.S. and Europe brought a flood of Mormon converts to Utah. During this time, Congress refused to admit Utah into the Union as a state, since statehood would mean an end to direct federal control over the territory and the possible ascension of politicians chosen and controlled by the LDS Church into most if not all federal, state, and local elected offices in the new state. Finally, in 1890, the church leadership announced that polygamy was no longer a central tenet, opening the way to a compromise. In 1896, Utah was admitted as the 45th state, with the Mormons dividing between Republicans and Democrats.
The Pony Express and the telegraph
The federal government provided subsidies for the development of mail and freight delivery, and by 1856, Congress authorized road improvements and an overland mail service to California. The new commercial wagon train service primarily hauled freight. In 1858, John Butterfield (1801–69) established a stage service that went from Saint Louis to San Francisco in 24 days along a southern route. This route was abandoned in 1861, after Texas joined the Confederacy, in favor of stagecoach services established via Fort Laramie and Salt Lake City, a 24-day journey, with Wells Fargo & Co. as the foremost provider (initially using the old "Butterfield" name). William Russell, hoping to get a government contract for a more rapid mail delivery service, started the Pony Express in 1860, cutting delivery time to ten days. He set up over 150 stations at regular intervals along the route. In 1861, Congress passed the Land-Grant Telegraph Act, which financed the construction of Western Union's transcontinental telegraph lines. Hiram Sibley, Western Union's head, negotiated exclusive agreements with railroads to run telegraph lines along their rights-of-way. Eight years before the transcontinental railroad opened, the First Transcontinental Telegraph linked Omaha, Nebraska, to San Francisco on October 24, 1861. The Pony Express ended in just 18 months because it could not compete with the telegraph.
Bleeding Kansas
Constitutionally, Congress could not deal with slavery in the states, but it did have jurisdiction in the western territories. California unanimously rejected slavery in 1850 and became a free state. New Mexico allowed slavery, but it was rarely seen there. Kansas was off-limits to slavery under the Missouri Compromise of 1820.
Free Soil elements feared that if slavery were allowed, rich planters would buy up the best lands and work them with gangs of slaves, leaving little opportunity for free white men to own farms. Few Southern planters were interested in Kansas, but the idea that slavery was illegal there implied they had a second-class status that was intolerable to their sense of honor and seemed to violate the principle of states' rights. With the passage of the extremely controversial Kansas–Nebraska Act in 1854, Congress left the decision up to the voters on the ground in Kansas. Across the North, a new major party was formed to fight slavery: the Republican Party, with numerous westerners in leadership positions, most notably Abraham Lincoln of Illinois. To influence the territorial decision, anti-slavery elements (also called "Jayhawkers" or "Free-soilers") financed the migration of politically determined settlers, but pro-slavery advocates fought back with pro-slavery settlers from Missouri. Violence on both sides was the result; in all, 56 men had been killed by the time the violence abated in 1859. By 1860 the pro-slavery forces were in control, but Kansas had only two slaves. The antislavery forces took over by 1861, and Kansas became a free state. The episode demonstrated that a democratic compromise between North and South over slavery was impossible and served to hasten the Civil War.
The Civil War in the West
Despite its large territory, the trans-Mississippi West had a small population, and its wartime story has to a large extent been underplayed in the historiography of the American Civil War.
The Trans-Mississippi theater
The Confederacy engaged in several important campaigns in the West. However, Kansas, a major area of conflict in the build-up to the war, was the scene of only one battle, at Mine Creek. Its proximity to Confederate lines nevertheless enabled pro-Confederate guerrillas, such as Quantrill's Raiders, to attack Union strongholds and massacre the residents. In Texas, citizens voted to join the Confederacy; anti-war Germans were hanged. Local troops took over the federal arsenal in San Antonio, with plans to grab the territories of northern New Mexico, Utah, and Colorado, and possibly California. Confederate Arizona was created by Arizona citizens who wanted protection against Apache raids after the United States Army units were moved out. The Confederacy then set its sights on gaining control of the New Mexico Territory. General Henry Hopkins Sibley was tasked with the campaign and, together with his Army of New Mexico, marched up the Rio Grande in an attempt to take the mineral wealth of Colorado as well as California. The First Regiment of Volunteers discovered the rebels and immediately warned and reinforced the Union forces at Fort Union. The Battle of Glorieta Pass soon followed; the Union victory ended the Confederate campaign, and the area west of Texas remained in Union hands. Missouri, a Union state where slavery was legal, became a battleground when the pro-secession governor, against the vote of the legislature, led troops to the federal arsenal at St. Louis; he was aided by Confederate forces from Arkansas and Louisiana. However, Union General Samuel Curtis regained St. Louis and all of Missouri for the Union. The state was the scene of numerous raids and guerrilla warfare in the west.
Peacekeeping
The U.S. Army after 1850 established a series of military posts across the frontier, designed to stop warfare among Indian tribes or between Indians and settlers.
Throughout the 19th century, Army officers typically built their careers in peacekeeping roles, moving from fort to fort until retirement; actual combat experience was uncommon for any one soldier. The most dramatic conflict was the Sioux war in Minnesota in 1862, when Dakota tribes systematically attacked German farms to drive out the settlers. For several days, Dakota attacks at the Lower Sioux Agency, New Ulm, and Hutchinson killed 300 to 400 white settlers. The state militia fought back, and Lincoln sent in federal troops. The ensuing battles at Fort Ridgely, Birch Coulee, Fort Abercrombie, and Wood Lake punctuated a six-week war, which ended in an American victory. The federal government tried 425 Indians for murder, and 303 were convicted and sentenced to death. Lincoln commuted the sentences of the majority, but 38 leaders were hanged. The decreased presence of Union troops in the West left behind untrained militias; hostile tribes used the opportunity to attack settlers. The militias struck back hard, most notably by attacking the winter quarters of the Cheyenne and Arapaho Indians, filled with women and children, at the Sand Creek massacre in eastern Colorado in late 1864. In 1864, Kit Carson and the U.S. Army trapped the entire Navajo tribe in New Mexico, where they had been raiding settlers, and put them on a reservation. Within the Indian Territory, now Oklahoma, conflicts arose among the Five Civilized Tribes, most of which, being slaveholders themselves, sided with the South. In 1862, Congress enacted two major laws to facilitate settlement of the West: the Homestead Act and the Pacific Railroad Act. The result by 1890 was millions of new farms in the Plains states, many operated by new immigrants from Germany and Scandinavia.
The Postbellum West
Territorial governance after the Civil War
With the war over and slavery abolished, the federal government focused on improving the governance of the territories. It subdivided several territories, preparing them for statehood, following the precedents set by the Northwest Ordinance of 1787. It standardized procedures and the supervision of territorial governments, taking away some local powers and imposing much "red tape", which grew the federal bureaucracy significantly. Federal involvement in the territories was considerable. In addition to direct subsidies, the federal government maintained military posts, provided safety from Indian attacks, bankrolled treaty obligations, conducted surveys and land sales, built roads, staffed land offices, made harbor improvements, and subsidized overland mail delivery. Territorial citizens came to decry both federal power and local corruption while at the same time lamenting that more federal dollars were not sent their way. Territorial governors were political appointees beholden to Washington, so they usually governed with a light hand, allowing the legislatures to deal with local issues. In addition to his role as civil governor, a territorial governor was also a militia commander, a local superintendent of Indian affairs, and the territory's liaison with federal agencies. The legislatures, on the other hand, spoke for the local citizens, and the federal government gave them considerable leeway to make local law. These improvements to governance still left plenty of room for profiteering.
As Mark Twain wrote while working for his brother, the territorial secretary of Nevada, "The government of my country snubs honest simplicity but fondles artistic villainy, and I think I might have developed into a very capable pickpocket if I had remained in the public service a year or two." "Territorial rings", corrupt associations of local politicians and business owners buttressed with federal patronage, embezzled from Indian tribes and local citizens, especially in the Dakota and New Mexico territories.
Federal land system
In acquiring, preparing, and distributing public land to private ownership, the federal government generally followed the system set forth by the Land Ordinance of 1785. Federal exploration and scientific teams would undertake reconnaissance of the land and determine Native American habitation. Through treaties, the land titles would be ceded by the resident tribes. Then surveyors would create detailed maps marking the land into squares of six miles (10 km) on each side, subdivided first into one-square-mile blocks and then into lots. Townships would be formed from the lots and sold at public auction. Unsold land could be purchased from the land office at a minimum price of $1.25 per acre. As part of public policy, the government would award public land to certain groups, such as veterans, through the use of "land scrip". The scrip traded in a financial market, often at below the $1.25-per-acre minimum price set by law, which gave speculators, investors, and developers another way to acquire large tracts of land cheaply. Land policy became politicized by competing factions and interests, and the question of slavery on new lands was contentious. As a counter to land speculators, farmers formed "claims clubs" to enable them to buy larger tracts than the allotments by trading among themselves at controlled prices.
In 1862, Congress passed three important bills that transformed the land system. The Homestead Act granted 160 acres (65 ha) free to each settler who improved the land for five years; citizens and non-citizens, including squatters and women, were all eligible. The only cost was a modest filing fee. The law was especially important in the settling of the Plains states. Many took a free homestead, and others purchased their land from railroads at low rates. The Pacific Railroad Act of 1862 provided for the land needed to build the transcontinental railroad. The land given to the railroads alternated with government-owned tracts saved for free distribution to homesteaders. To be equitable, the federal government reduced the size of the homestead tracts in these areas because of their perceived higher value, given their proximity to the rail line. Railroads had up to five years after tracks were laid to sell or mortgage their land, after which unsold land could be purchased by anyone. Often railroads sold some of their government-acquired land to homesteaders immediately to encourage settlement and the growth of the markets the railroads would then be able to serve. Nebraska railroads in the 1870s were strong boosters of lands along their routes. They sent agents to Germany and Scandinavia with package deals that included cheap transportation for the family as well as its furniture and farm tools, and they offered long-term credit at low rates. Boosterism succeeded in attracting adventurous American and European families to Nebraska, helping them purchase land grant parcels on good terms. The selling price depended on such factors as soil quality, water, and distance from the railroad.
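As a rough illustration of the arithmetic behind the rectangular survey described above, and assuming only the standard survey units (a six-mile-square township, 640 acres to the square mile, and the $1.25-per-acre minimum price already quoted), the sums work out as follows; this is an illustrative sketch, not a figure drawn from the sources summarized in this article.
\begin{align*}
\text{township} &= 6\ \text{mi} \times 6\ \text{mi} = 36\ \text{sq mi} = 36 \times 640 = 23{,}040\ \text{acres},\\
\text{quarter-section} &= \tfrac{640}{4} = 160\ \text{acres},\\
\text{minimum cash price} &= 160 \times \$1.25 = \$200\ \text{per quarter-section}.
\end{align*}
The 160-acre quarter-section in this sketch is the same unit later made free to settlers under the Homestead Act.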
The Morrill Act of 1862 provided land grants to states to begin colleges of agriculture and mechanical arts (engineering). Black colleges became eligible for these land grants in 1890. The Act succeeded in its goals of opening new universities and making farming more scientific and profitable.
Transcontinental railroads
In the 1850s, the government sponsored surveys to chart the remaining unexplored regions of the West and to plan possible routes for a transcontinental railroad. Much of this work was undertaken by the Corps of Engineers, the Corps of Topographical Engineers, and the Bureau of Explorations and Surveys, and became known as "The Great Reconnaissance". Regionalism animated debates in Congress regarding the choice of a northern, central, or southern route. Engineering requirements for the rail route were an adequate supply of water and wood and as nearly level a route as possible, given the weak locomotives of the era. In the 1850s, proposals to build a transcontinental railroad failed because of Congressional disputes over slavery. With the secession of the Confederate states in 1861, the modernizers in the Republican Party took over Congress and wanted a line to link to California. Private companies were to build and operate the line. Construction would be done by unskilled laborers who would live in temporary camps along the way. Immigrants from China and Ireland did most of the construction work. Theodore Judah, the chief engineer of the Central Pacific, surveyed the route from San Francisco east. Judah's tireless lobbying efforts in Washington were largely responsible for the passage of the 1862 Pacific Railroad Act, which authorized construction of both the Central Pacific and the Union Pacific (which built west from Omaha). In 1862, four rich Sacramento merchants (Leland Stanford, Collis Huntington, Charles Crocker, and Mark Hopkins) took charge, with Crocker in charge of construction. The line was completed in May 1869. Coast-to-coast passenger travel in eight days now replaced wagon trains or sea voyages that took six to ten months and cost much more.
The road was built with mortgages from New York, Boston, and London, backed by land grants. There were no federal cash subsidies, but there was a loan to the Central Pacific that was eventually repaid at six percent interest. The federal government offered land grants in a checkerboard pattern: the railroad sold every other square, and the government opened its half to homesteaders. The government also loaned money (later repaid) at $16,000 per mile on level stretches and $32,000 to $48,000 per mile in mountainous terrain. Local and state governments also aided the financing. Most of the manual laborers on the Central Pacific were new arrivals from China. Kraus shows how these men lived and worked and how they managed their money. He concludes that senior officials quickly realized the high degree of cleanliness and reliability of the Chinese. The Central Pacific employed over 12,000 Chinese workers, 90% of its manual workforce. Ong explores whether the Chinese railroad workers were exploited by the railroad, with whites holding the better positions. He finds that the railroad set different wage rates for whites and Chinese and used the latter in the more menial and dangerous jobs, such as the handling and pouring of nitroglycerin. However, the railroad also provided the camps and food the Chinese wanted and protected the Chinese workers from threats from whites.
Building the railroad required six main activities: surveying the route, blasting a right of way, building tunnels and bridges, clearing and laying the roadbed, laying the ties and rails, and maintaining and supplying the crews with food and tools. The work was highly physical, using horse-drawn plows and scrapers and manual picks, axes, sledgehammers, and handcarts. A few steam-driven machines, such as shovels, were used. The rails were iron (steel came a few years later), and each was heavy enough to require five men to lift. For blasting, the crews used black powder. The Union Pacific construction crews, mostly Irish Americans, averaged about two miles (3 km) of new track per day.
Six transcontinental railroads were built in the Gilded Age (plus two in Canada); they opened up the West to farmers and ranchers. From north to south they were the Northern Pacific, the Milwaukee Road, and the Great Northern along the Canada–US border; the Union Pacific/Central Pacific in the middle; and, to the south, the Santa Fe and the Southern Pacific. All but the Great Northern of James J. Hill relied on land grants. The financial stories were often complex. For example, the Northern Pacific received its major land grant in 1864. Financier Jay Cooke (1821–1905) was in charge until 1873, when he went bankrupt. Federal courts, however, kept bankrupt railroads in operation. In 1881, Henry Villard (1835–1900) took over and finally completed the line to Seattle, but the line went bankrupt in the Panic of 1893 and Hill took it over. He then merged several lines with financing from J.P. Morgan, but President Theodore Roosevelt broke them up in 1904.
In the first year of operation, 1869–70, 150,000 passengers made the long trip. Settlers were encouraged with promotions to come west on free scouting trips and to buy railroad land on easy terms spread over several years. The railroads had "immigration bureaus" that advertised low-cost package deals, including passage and land on easy terms, for farmers in Germany and Scandinavia. The prairies, they were promised, did not mean backbreaking toil because "settling on the prairie which is ready for the plow is different from plunging into a region covered with timber". The settlers were customers of the railroads, shipping their crops and cattle out and bringing in manufactured products. All manufacturers benefited from the lower costs of transportation and the much larger radius of business. White concludes with a mixed verdict. The transcontinentals did open up the West to settlement, brought in many thousands of highly paid workers and managers, created thousands of towns and cities, oriented the nation onto an east–west axis, and proved highly valuable for the nation as a whole. On the other hand, too many were built, and they were built too far ahead of actual demand. The result was a bubble that left heavy losses to investors and led to poor management practices. By contrast, as White notes, the lines in the Midwest and East, supported by a very large population base, fostered farming, industry, and mining while generating steady profits and receiving few government benefits.
Migration after the Civil War
After the Civil War, many from the East Coast and Europe were lured west by reports from relatives and by extensive advertising campaigns promising "the Best Prairie Lands", "Low Prices", "Large Discounts For Cash", and "Better Terms Than Ever!".
The new railroads provided the opportunity for migrants to go out and take a look, with special family tickets, the cost of which could be applied to land purchases offered by the railroads. Farming the plains was indeed more difficult than back east: water management was more critical, lightning fires were more prevalent, the weather was more extreme, and rainfall was less predictable. The fearful stayed home. The actual migrants looked beyond fears of the unknown. Their chief motivation to move west was to find a better economic life than the one they had. Farmers sought larger, cheaper, and more fertile land; merchants and tradesmen sought new customers and new leadership opportunities; laborers wanted higher-paying work and better conditions. As settlers moved west, they faced challenges along the way, such as the lack of wood for housing, blizzards, droughts, and fearsome tornadoes. On the treeless prairies, homesteaders built sod houses. One of the greatest plagues to hit the homesteaders was the locust plague of 1874, which devastated the Great Plains. These challenges hardened the settlers who tamed the frontier.
Alaska Purchase
After Russia's defeat in the Crimean War, Tsar Alexander II of Russia decided to sell the Russian American territory of Alaska to the United States. The decision was motivated in part by a need for money and in part by a recognition within the Russian government that Britain could easily capture Alaska in any future conflict between the two nations. U.S. Secretary of State William Seward negotiated with the Russians to acquire the tremendous landmass of Alaska, an area roughly one-fifth the size of the rest of the United States. On March 30, 1867, the U.S. purchased the territory from the Russians for $7.2 million ($118 million in today's dollars). The transfer ceremony was completed in Sitka on October 18, 1867, as Russian soldiers handed over the territory to the United States Army. Critics at the time decried the purchase as "Seward's Folly", reasoning that the new territory had no natural resources and that no one could be bothered to live in such a cold, icy climate. Although the development and settlement of Alaska grew slowly, the discovery of goldfields during the Klondike Gold Rush in 1896, the Nome Gold Rush in 1898, and the Fairbanks Gold Rush in 1902 brought thousands of miners into the territory, propelling Alaska's prosperity for decades to come. Major oil discoveries in the late 20th century made the state rich.
Oklahoma Land Rush
In 1889, Washington opened a large tract of unoccupied lands in the Oklahoma territory. On April 22, over 100,000 settlers and cattlemen (known as "boomers") lined up at the border, and when the army's guns and bugles gave the signal, they began a mad dash to stake their claims in the Land Run of 1889. A witness wrote, "The horsemen had the best of it from the start. It was a fine race for a few minutes, but soon the riders began to spread out like a fan, and by the time they reached the horizon they were scattered about as far as the eye could see". In a single day, the towns of Oklahoma City, Norman, and Guthrie came into existence. In the same manner, millions of acres of additional land were opened up and settled in the following four years.
Indian Wars
Indian wars occurred throughout the United States, though the conflicts are generally separated into two categories: the Indian wars east of the Mississippi River and the Indian wars west of the Mississippi.
The U.S. Bureau of the Census (1894) provided an estimate of the deaths. Historian Russell Thornton estimates that from 1800 to 1890, the Indian population declined from 600,000 to as few as 250,000. The depopulation was caused principally by disease as well as warfare. Many tribes in Texas, such as the Karankawa, Akokisa, and Bidai, were wiped out by conflicts with Texan settlers. The rapid depopulation of the American Indians after the Civil War alarmed the U.S. government, and the Doolittle Committee was formed to investigate the causes and to recommend ways of preserving the population. The solutions presented by the committee, such as the establishment of five boards of inspection to prevent abuses of the Indians, had little effect as large-scale western migration commenced.
Indian wars east of the Mississippi
The Trail of Tears
The expansion of migration into the southeastern United States in the 1820s and 1830s forced the federal government to deal with the "Indian question". The Indians were under federal control but were independent of state governments. State legislatures and state judges had no authority on their lands, and the states demanded control. Politically, the new Democratic Party of President Andrew Jackson demanded the removal of the Indians from the southeastern states to new lands in the west, while the Whig Party and the Protestant churches were opposed to removal. Jacksonian Democracy proved irresistible, winning the presidential elections of 1828, 1832, and 1836. By 1837, the Indian removal policy was being carried out under the act of Congress signed by Andrew Jackson in 1830. Many historians have sharply attacked Jackson. The 1830 law theoretically provided for voluntary removal and had safeguards for the rights of Indians, but in reality the removal was involuntary and brutal, and the safeguards were ignored. Jackson justified his actions by stating that Indians had "neither the intelligence, the industry, the moral habits, nor the desire of improvements". The forced march of about twenty tribes included the "Five Civilized Tribes" (Creek, Choctaw, Cherokee, Chickasaw, and Seminole). To motivate Natives reluctant to move, the federal government also promised rifles, blankets, tobacco, and cash. By 1835, the Cherokee, the last Indian nation in the South, had signed a removal treaty and were subsequently relocated to Oklahoma. All the tribes were given new land in the "Indian Territory" (which later became Oklahoma). Of the approximately 70,000 Indians removed, about 18,000 died from disease, starvation, and exposure on the route. This exodus has become known as the Trail of Tears (in Cherokee "Nunna dual Tsuny", "The Trail Where They Cried"). The impact of the removals was severe. The transplanted tribes had considerable difficulty adapting to their new surroundings and sometimes clashed with the tribes native to the area. The only way for an Indian to remain and avoid removal was to accept a federal offer of an individual allotment of land, its size depending on family size, in exchange for leaving the tribe and becoming a state citizen subject to state and federal law. However, many Natives who took the offer were defrauded by "ravenous speculators" who stole their claims and sold their land to whites. Fraudulent claims were especially extensive in Mississippi. Of the five tribes, the Seminole offered the most resistance, hiding out in the Florida swamps and waging a war that cost the U.S. Army 1,500 lives and $20 million.
Indian wars west of the Mississippi

Indian warriors in the West, using their traditional style of limited, battle-oriented warfare, confronted the U.S. Army. The Indians emphasized bravery in combat, while the Army put its emphasis not so much on individual combat as on building networks of forts, developing a logistics system, and using the telegraph and railroads to coordinate and concentrate its forces. Plains Indian intertribal warfare bore no resemblance to the "modern" warfare the Americans practiced along European lines, using their vast advantages in population and resources. Many tribes avoided warfare and others supported the U.S. Army. The tribes hostile to the government continued to pursue their traditional brand of fighting and, therefore, were unable to have any permanent success against the Army.

Indian wars were fought throughout the western regions, with more conflicts in the states bordering Mexico than in the interior states. Arizona ranked highest, with 310 known battles fought within the state's boundaries between Americans and the Natives. Arizona also ranked highest in war deaths, with 4,340 killed, including soldiers, civilians, and Native Americans; that was more than twice as many as occurred in Texas, the second-highest-ranking state. Most of the deaths in Arizona were caused by the Apache. Michno also says that fifty-one percent of the Indian war battles between 1850 and 1890 took place in Arizona, Texas, and New Mexico, which also accounted for thirty-seven percent of the casualties in the country west of the Mississippi River.

One of the deadliest Indian wars was the Snake War of 1864–1868, fought by a confederacy of Northern Paiute, Bannock, and Shoshone Native Americans, called the "Snake Indians", against the United States Army in Oregon, Nevada, California, and Idaho along the Snake River. The war started when tension arose between the local Indians and the flood of pioneer trains encroaching on their lands, which resulted in competition for food and resources. Indians of this group attacked and harassed emigrant parties and miners crossing the Snake River Valley, which led to retaliation by white settlers and the intervention of the United States Army. The war left a total of 1,762 men on both sides killed, wounded, or captured. Unlike other Indian Wars, the Snake War has been widely forgotten in United States history, having received only limited coverage.

The Colorado War, fought by the Cheyenne, Arapaho, and Sioux, took place in the territories of Colorado and Nebraska. The conflict was fought in 1863–1865, while the American Civil War was still ongoing. Caused by rising tensions between the Natives and the white settlers in the region, the war was infamous for the atrocities committed by both parties. White militias destroyed Native villages and killed Indian women and children, as in the bloody Sand Creek massacre, and the Indians raided ranches and farms and killed white families, as in the American Ranch massacre and the raid on Godfrey Ranch. In the Apache Wars, Colonel Christopher "Kit" Carson forced the Mescalero Apache onto a reservation in 1862. In 1863–1864, Carson used a scorched-earth policy in the Navajo Campaign, burning Navajo fields and homes and capturing or killing their livestock. He was aided by other Indian tribes with long-standing enmity toward the Navajos, chiefly the Utes.
Another prominent conflict of this war was Geronimo's fight against settlements in the Southwest in the 1880s. The Apaches under his command ambushed U.S. cavalry units and forts, as in their attack at Cibecue Creek, while also raiding prominent farms and ranches, such as their infamous attack on the Empire Ranch that killed three cowboys. The U.S. finally induced the last hostile Apache band under Geronimo to surrender in 1886.

During the Comanche Campaign, the Red River War was fought in 1874–75 in response to the Comanche's dwindling food supply of buffalo, as well as the refusal of a few bands to be placed on reservations. Comanches started raiding small settlements in Texas, which led to the Battle of Buffalo Wallow and the Second Battle of Adobe Walls, fought by buffalo hunters, and the Battle of Lost Valley, fought against the Texas Rangers. The war finally ended with a last confrontation between the Comanches and the U.S. Cavalry in Palo Duro Canyon. The last Comanche war chief, Quanah Parker, surrendered in June 1875, finally ending the wars between Texans and Indians.

Red Cloud's War was led by the Lakota chief Red Cloud against the military, which was erecting forts along the Bozeman Trail. It was the most successful campaign against the U.S. during the Indian Wars. By the Treaty of Fort Laramie (1868), the U.S. granted a large reservation to the Lakota, without military presence; it included the entire Black Hills.

Captain Jack was a chief of the Native American Modoc tribe of California and Oregon, and was their leader during the Modoc War. With 53 Modoc warriors, Captain Jack held off 1,000 men of the U.S. Army for 7 months, and killed General Edward Canby.

In June 1877, in the Nez Perce War, the Nez Perce under Chief Joseph, unwilling to give up their traditional lands and move to a reservation, undertook a 1,200-mile (2,000 km) fighting retreat from Oregon to near the Canada–US border in Montana. Numbering only 200 warriors, the Nez Perce "battled some 2,000 American regulars and volunteers of different military units, together with their Indian auxiliaries of many tribes, in a total of eighteen engagements, including four major battles and at least four fiercely contested skirmishes." The Nez Perce were finally surrounded at the Battle of Bear Paw and surrendered.

The Great Sioux War of 1876 was conducted by the Lakota under Sitting Bull and Crazy Horse. The conflict began after repeated violations of the Treaty of Fort Laramie (1868) once gold was discovered in the hills. One of its famous battles was the Battle of the Little Bighorn, in which combined Sioux and Cheyenne forces defeated the 7th Cavalry, led by Lieutenant Colonel George Armstrong Custer.

The Ute War, fought by the Ute people against settlers in Utah and Colorado, led to two battles: the Meeker massacre, which killed 11 Indian agency employees, and the Pinhook massacre, which killed 13 armed ranchers and cowboys. The Ute conflicts finally ended after the Posey War of 1923, which was fought against settlers and law enforcement.

The end of the major Indian wars came at the Wounded Knee massacre on December 29, 1890, where the 7th Cavalry attempted to disarm a Sioux man and precipitated an engagement in which about 150 Sioux men, women, and children were killed. Only thirteen days before, Sitting Bull had been killed with his son Crow Foot in a gun battle with a group of Indian police that had been sent by the American government to arrest him.
Additional conflicts and incidents, such as the Bluff War (1914–1915) and the Posey War, would occur into the early 1920s, though the last combat engagement between U.S. Army soldiers and Native Americans occurred at the Battle of Bear Valley on January 9, 1918.

Forts and outposts

As the frontier moved westward, the establishment of U.S. military forts moved with it, representing and maintaining federal sovereignty over new territories. The military garrisons usually lacked defensible walls but were seldom attacked. They served as bases for troops at or near strategic areas, particularly for counteracting the Indian presence. For example, Fort Bowie protected Apache Pass in southern Arizona along the mail route between Tucson and El Paso and was used to launch attacks against Cochise and Geronimo. Fort Laramie and Fort Kearny helped protect immigrants crossing the Great Plains, and a series of posts in California protected miners. Forts were constructed to launch attacks against the Sioux. As Indian reservations sprang up, the military set up forts to protect them. Forts also guarded the Union Pacific and other rail lines. Other important forts were Fort Sill, Oklahoma; Fort Smith, Arkansas; Fort Snelling, Minnesota; Fort Union, New Mexico; Fort Worth, Texas; and Fort Walla Walla in Washington. Fort Omaha, Nebraska, was home to the Department of the Platte and was responsible for outfitting most Western posts for more than 20 years after its founding in the late 1870s. Fort Huachuca in Arizona was also originally a frontier post and is still in use by the United States Army.

Indian reservations

Settlers on their way overland to Oregon and California became targets of Indian threats. Robert L. Munkres read 66 diaries of parties traveling the Oregon Trail between 1834 and 1860 to estimate the actual dangers they faced from Indian attacks in Nebraska and Wyoming. The vast majority of diarists reported no armed attacks at all. However, many did report harassment by Indians who begged or demanded tolls and stole horses and cattle. Madsen reports that the Shoshoni and Bannock tribes north and west of Utah were more aggressive toward wagon trains. The federal government attempted to reduce tensions and create new tribal boundaries in the Great Plains with two new treaties in the early 1850s. The Treaty of Fort Laramie established tribal zones for the Sioux, Cheyennes, Arapahos, Crows, and others, and allowed for the building of roads and posts across the tribal lands. A second treaty secured safe passage along the Santa Fe Trail for wagon trains. In return, the tribes would receive, for ten years, annual compensation for damages caused by migrants. The Kansas and Nebraska territories also became contentious areas as the federal government sought those lands for the future transcontinental railroad. In the Far West, settlers began to occupy land in Oregon and California before the federal government secured title from the native tribes, causing considerable friction. In Utah, the Mormons also moved in before federal ownership was obtained. A new policy of establishing reservations gradually took shape after the boundaries of the "Indian Territory" began to be ignored. In providing for Indian reservations, Congress and the Office of Indian Affairs hoped to de-tribalize Native Americans and prepare them for integration with the rest of American society, the "ultimate incorporation into the great body of our citizen population".
This allowed for the development of dozens of riverfront towns along the Missouri River in the new Nebraska Territory, which was carved from the remainder of the Louisiana Purchase after the Kansas–Nebraska Act. Influential pioneer towns included Omaha, Nebraska City, and St. Joseph. American attitudes towards Indians during this period ranged from malevolence ("the only good Indian is a dead Indian") to misdirected humanitarianism (Indians live in "inferior" societies and by assimilation into white society they can be redeemed) to somewhat realistic (Native Americans and settlers could co-exist in separate but equal societies, dividing up the remaining western land). Dealing with nomadic tribes complicated the reservation strategy, and decentralized tribal power made treaty-making difficult among the Plains Indians. Conflicts erupted in the 1850s, resulting in various Indian wars.

In these times of conflict, Indians became less tolerant of white men entering their territory. As in the case of Oliver Loving, they would sometimes attack cowboys and their cattle caught crossing the borders of their land. They would also prey upon livestock if food was scarce during hard times. However, the relationship between cowboys and Native Americans was more cooperative than it is usually portrayed, and cowboys would occasionally pay a toll of 10 cents per cow for permission to travel through tribal land. Indians also preyed upon stagecoaches traveling through the frontier for their horses and valuables.

After the Civil War, as the volunteer armies disbanded, the regular army cavalry regiments increased in number from six to ten, among them Custer's U.S. 7th Cavalry Regiment of Little Bighorn fame, and the African-American U.S. 9th Cavalry Regiment and U.S. 10th Cavalry Regiment. The black units, along with others (both cavalry and infantry), collectively became known as the Buffalo Soldiers. According to Robert M. Utley:

Social history

Democratic society

Westerners were proud of their leadership in the movement for democracy and equality, a major theme for Frederick Jackson Turner. The new states of Kentucky, Tennessee, Alabama, and Ohio were more democratic than the parent states back East in terms of politics and society. The Western states were the first to give women the right to vote. By 1900 the West, especially California and Oregon, led the Progressive movement. Scholars have examined the social history of the West in search of the American character. The history of Kansas, argued historian Carl L. Becker a century ago, reflects American ideals. He wrote: "The Kansas spirit is the American spirit double distilled. It is a new grafted product of American individualism, American idealism, American intolerance. Kansas is America in microcosm."

Scholars have compared the emergence of democracy in America with that of other countries, with regard to the frontier experience. Selwyn Troen has made the comparison with Israel. The American frontiersmen relied on individual effort, in the context of very large quantities of unsettled land with weak external enemies. Israel, by contrast, operated in a very small geographical zone, surrounded by more powerful neighbors. The Jewish pioneer was not building an individual or family enterprise but was a conscious participant in nation-building, with a high priority on collective and cooperative planned settlements. The Israeli pioneers brought in American experts on irrigation and agriculture to provide technical advice.
However, they rejected the American frontier model in favor of a European model that supported their political and security concerns.

Urban frontier

The cities played an essential role in the development of the frontier, as transportation hubs, financial and communications centers, and providers of merchandise, services, and entertainment. As the railroads pushed westward into the unsettled territory after 1860, they built service towns to handle the needs of railroad construction crews, train crews, and passengers who ate meals at scheduled stops. In most of the South there were very few cities of any size for miles around, and this pattern held for Texas as well, so railroads did not arrive until the 1880s. They then shipped the cattle out, and cattle drives became short-distance affairs. However, the passenger trains were often the targets of armed gangs.

Denver's economy before 1870 had been rooted in mining; it then grew by expanding its role in railroads, wholesale trade, manufacturing, food processing, and servicing the growing agricultural and ranching hinterland. Between 1870 and 1890, manufacturing output soared from $600,000 to $40 million, and the population grew twentyfold to 107,000. Denver had always attracted miners, workers, prostitutes, and travelers. Saloons and gambling dens sprang up overnight. The city fathers boasted of its fine theaters, especially the Tabor Grand Opera House, built in 1881. By 1890, Denver had grown to be the 26th-largest city in America and the fifth-largest city west of the Mississippi River. The boom times attracted millionaires and their mansions, as well as hustlers, poverty, and crime. Denver gained regional notoriety with its range of bawdy houses, from the sumptuous quarters of renowned madams to the squalid "cribs" located a few blocks away. Business was good; visitors spent lavishly, then left town. As long as madams conducted their business discreetly, and "crib girls" did not advertise their availability too crudely, authorities took their bribes and looked the other way. Occasional cleanups and crackdowns satisfied the demands for reform.

With its giant mountain of copper, Butte, Montana, was the largest, richest, and rowdiest mining camp on the frontier. It was an ethnic stronghold, with the Irish Catholics in control of politics and of the best jobs at the leading mining corporation, Anaconda Copper. City boosters opened a public library in 1894. Ring argues that the library was originally a mechanism of social control, "an antidote to the miners' proclivity for drinking, whoring, and gambling". It was also designed to promote middle-class values and to convince Easterners that Butte was a cultivated city.

Race and ethnicity

European immigrants

European immigrants often built communities of similar religious and ethnic backgrounds. For example, many Finns went to Minnesota and Michigan, Swedes and Norwegians to Minnesota and the Dakotas, Irish to railroad centers along the transcontinental lines, Volga Germans to North Dakota, and German Jews to Portland, Oregon.

African-Americans

African Americans moved West as soldiers, as well as cowboys, farmhands, saloon workers, cooks, and outlaws. The Buffalo Soldiers were soldiers in the all-black 9th and 10th Cavalry regiments and 24th and 25th Infantry Regiments of the U.S. Army. They had white officers and served in numerous western forts. About 4,000 black people came to California in Gold Rush days.
In 1879, after the end of Reconstruction in the South, several thousand Freedmen moved from Southern states to Kansas. Known as the Exodusters, they were lured by the prospect of good, cheap homestead land and better treatment. The all-black town of Nicodemus, Kansas, founded in 1877, was an organized settlement that predates the Exodusters but is often associated with them.

Asians

The California Gold Rush included thousands of Mexican and Chinese arrivals. Chinese migrants, many of whom were impoverished peasants, provided the major part of the workforce for the building of the Central Pacific portion of the transcontinental railroad. Most of them went home by 1870, when the railroad was finished. Those who stayed worked in mining and agriculture or opened small shops such as groceries, laundries, and restaurants. Hostility against the Chinese remained high in the western states and territories, as seen in the Chinese Massacre Cove episode and the Rock Springs massacre. The Chinese were generally forced into self-sufficient "Chinatowns" in cities such as San Francisco, Portland, and Seattle. In Los Angeles, the last major anti-Chinese riot took place in 1871, after which local law enforcement grew stronger. In the late 19th century, Chinatowns were squalid slums known for their vice, prostitution, drugs, and violent battles between "tongs". By the 1930s, however, Chinatowns had become clean, safe, and attractive tourist destinations.

The first Japanese arrived in the U.S. in 1869, with the arrival of 22 people from samurai families who settled in Placer County, California, to establish the Wakamatsu Tea and Silk Farm Colony. Japanese were recruited to work on plantations in Hawaii beginning in 1885. By the late 19th century, more Japanese emigrated to Hawaii and the American mainland. The Issei, or first-generation Japanese immigrants, were not allowed to become U.S. citizens because they were not "a free white person", per the United States Naturalization Law of 1790. This did not change until the passage of the Immigration and Nationality Act of 1952, known as the McCarran–Walter Act, which allowed Japanese immigrants to become naturalized U.S. citizens. By 1920, Japanese-American farmers produced US$67 million worth of crops, more than ten percent of California's total crop value. There were 111,000 Japanese Americans in the U.S., of whom 82,000 were immigrants and 29,000 were U.S.-born. Congress passed the Immigration Act of 1924, effectively ending all Japanese immigration to the U.S. The U.S.-born children of the Issei were citizens, in accordance with the 14th Amendment to the United States Constitution.

Hispanics

The great majority of Hispanics who had been living in the former territories of New Spain remained and became American citizens in 1848, though many subsequently lost their lands to American settlers. The 10,000 or so Californios lived in southern California and after 1880 were overshadowed by the hundreds of thousands of arrivals from the east. Those in New Mexico dominated towns and villages that changed little until well into the 20th century. New arrivals came from Mexico, especially after the Revolution that began in 1910 brought violence to thousands of villages all across Mexico. Most refugees went to Texas or California, and soon poor barrios appeared in many border towns.
The California "Robin Hood", Joaquin Murrieta, led a gang in the 1850s that burned houses, killed exploitative miners, robbed the stagecoaches of landowners, and fought against the violence and discrimination directed at Latin Americans. In Texas, Juan Cortina led a 20-year campaign against Anglos and the Texas Rangers, starting around 1859.

Family life

On the Great Plains very few single men attempted to operate a farm or ranch; farmers clearly understood the need for a hard-working wife, and numerous children, to handle the many chores, including child-rearing, feeding and clothing the family, managing the housework, and feeding the hired hands. During the early years of settlement, farm women played an integral role in assuring family survival by working outdoors. After a generation or so, women increasingly left the fields, thus redefining their roles within the family. New conveniences such as sewing and washing machines encouraged women to turn to domestic roles. The scientific housekeeping movement, promoted across the land by the media and government extension agents, as well as county fairs which featured achievements in home cookery and canning, advice columns for women in the farm papers, and home economics courses in the schools, all contributed to this trend.

Although the eastern image of farm life on the prairies emphasizes the isolation of the lonely farmer and farm life, in reality rural folk created a rich social life for themselves. They often sponsored activities that combined work, food, and entertainment such as barn raisings, corn huskings, quilting bees, Grange meetings, church activities, and school functions. The womenfolk organized shared meals and potluck events, as well as extended visits between families.

Childhood

Childhood on the American frontier is contested territory. One group of scholars, following the lead of novelists Willa Cather and Laura Ingalls Wilder, argues that the rural environment was beneficial to a child's upbringing. Historians Katherine Harris and Elliott West write that rural upbringing allowed children to break loose from urban hierarchies of age and gender, promoted family interdependence, and in the end produced children who were more self-reliant, mobile, adaptable, responsible, independent, and more in touch with nature than their urban or eastern counterparts. On the other hand, historians Elizabeth Hampsten and Lillian Schlissel offer a grim portrait of loneliness, privation, abuse, and demanding physical labor from an early age. Riney-Kehrberg takes a middle position.

Prostitution and gambling

Entrepreneurs set up shops and businesses to cater to the miners. Houses of prostitution, found in every mining camp, were world-famous. Prostitution was a growth industry attracting sex workers from around the globe, pulled in by the money despite the harsh and dangerous working conditions and low prestige. Chinese women were frequently sold by their families and taken to the camps as prostitutes; they had to send their earnings back to the family in China. In Virginia City, Nevada, a prostitute, Julia Bulette, was one of the few who achieved "respectable" status. She nursed victims of an influenza epidemic, which gave her acceptance in the community and the support of the sheriff. The townspeople were shocked when she was murdered in 1867; they gave her a lavish funeral and speedily tried and hanged her assailant.
Until the 1890s, madams predominantly ran the businesses, after which male pimps took over, and the treatment of the women generally declined. It was not uncommon for bordellos in Western towns to operate openly, without the stigma of East Coast cities. Gambling and prostitution were central to life in these western towns, and only later—as the female population increased, reformers moved in, and other civilizing influences arrived—did prostitution become less blatant and less common. After a decade or so the mining towns attracted respectable women who ran boarding houses, organized church societies, worked as laundresses and seamstresses, and strove for independent status.

Whenever a new settlement or mining camp started, one of the first buildings or tents erected would be a gambling hall. As the population grew, gambling halls were typically the largest and most ornately decorated buildings in any town and often housed a bar, a stage for entertainment, and hotel rooms for guests. These establishments were a driving force behind the local economy, and many towns measured their prosperity by the number of gambling halls and professional gamblers they had. Towns that were friendly to gambling were typically known to sporting men as "wide-awake" or "wide-open". Cattle towns in Texas, Oklahoma, Kansas, and Nebraska became famous centers of gambling. The cowboys had been accumulating their wages and postponing their pleasures until they finally arrived in town with money to wager. Abilene, Dodge City, Wichita, Omaha, and Kansas City all had an atmosphere that was convivial to gaming. Such an atmosphere also invited trouble, and these towns developed reputations as lawless and dangerous places.

Law and order

Historian Waddy W. Moore uses court records to show that on the sparsely settled Arkansas frontier lawlessness was common. He distinguished two types of crimes: unprofessional (dueling, crimes of drunkenness, selling whiskey to the Indians, cutting trees on federal land) and professional (rustling, highway robbery, counterfeiting). Criminals found many opportunities to rob pioneer families of their possessions, while the few underfunded lawmen had great difficulty detecting, arresting, holding, and convicting wrongdoers. Bandits, typically in groups of two or three, rarely attacked stagecoaches with a guard carrying a sawed-off, double-barreled shotgun; it proved less risky to rob teamsters, people on foot, and solitary horsemen, while bank robberies were harder to pull off because of the security of the establishments. According to historian Brian Robb, the earliest form of organized crime in America was born from the gangs of the Old West. When criminals were convicted, the punishment was severe.

Aside from the occasional Western sheriff and marshal, there were various other law enforcement agencies throughout the American frontier, such as the Texas Rangers. These lawmen were instrumental not just in keeping the peace but also in protecting the locals from Indian and Mexican threats at the border. Law enforcement tended to be more stringent in towns than in rural areas. Law enforcement emphasized maintaining stability more than armed combat, focusing on drunkenness, disarming cowboys who violated gun-control edicts, and dealing with flagrant breaches of gambling and prostitution ordinances. Dykstra argues that the violent image of the cattle towns in film and fiction is largely a myth.
The real Dodge City, he says, was the headquarters for the buffalo-hide trade of the Southern Plains and one of the West's principal cattle towns, a sale and shipping point for cattle arriving from Texas. He states there is a "second Dodge City" that belongs to the popular imagination and thrives as a cultural metaphor for violence, chaos, and depravity. For the cowboy arriving with money in hand after two months on the trail, the town was exciting. A contemporary eyewitness of Hays City, Kansas, paints a vivid image of this cattle town:

It has been acknowledged that the popular portrayal of Dodge City in film and fiction carries a note of truth, however, as gun crime was rampant in the city before the establishment of a local government. Soon after the city's residents officially established their first municipal government, a law banning concealed firearms was enacted and crime was reduced soon afterward. Similar laws were passed in other frontier towns to reduce the rate of gun crime as well. As UCLA law professor Adam Winkler noted:

Tombstone, Arizona, was a turbulent mining town that flourished longer than most, from 1877 to 1929. Silver was discovered in 1877, and by 1881 the town had a population of over 10,000. In 1879 the newly arrived Earp brothers bought shares in the Vizina mine, water rights, and gambling concessions, and Virgil, Wyatt, and Morgan Earp obtained positions at different times as federal and local lawmen. After more than a year of threats and feuding, they, along with Doc Holliday, killed three outlaws in the Gunfight at the O.K. Corral, the most famous gunfight of the Old West. In the aftermath, Virgil Earp was maimed in an ambush and Morgan Earp was assassinated while playing billiards. Wyatt and others, including his brothers James Earp and Warren Earp, pursued those they believed responsible in an extra-legal vendetta, and warrants were issued for their arrest in the murder of Frank Stilwell. The Cochise County Cowboys were one of the first organized crime syndicates in the United States, and their demise came at the hands of Wyatt Earp. Western storytellers and filmmakers featured the gunfight in many Western productions. Walter Noble Burns's novel Tombstone (1927) made Earp famous. Hollywood celebrated Earp's Tombstone days with John Ford's My Darling Clementine (1946), John Sturges's Gunfight at the O.K. Corral (1957) and Hour of the Gun (1967), Frank Perry's Doc (1971), George Cosmatos's Tombstone (1993), and Lawrence Kasdan's Wyatt Earp (1994). They solidified Earp's modern reputation as the Old West's deadliest gunman.

Banditry

The major type of banditry was conducted by the infamous outlaws of the West, including Jesse James, Billy the Kid, the Dalton Gang, Black Bart, Butch Cassidy, the Sundance Kid, and their Wild Bunch, along with hundreds of others who preyed on banks, trains, stagecoaches, and in some cases even armed government transports, as in the Wham Paymaster Robbery and the Skeleton Canyon Robbery. Some of the outlaws, such as Jesse James, were products of the violence of the Civil War (James had ridden with Quantrill's Raiders), and others became outlaws during hard times in the cattle industry. Many were misfits and drifters who roamed the West avoiding the law. In rural areas, Joaquin Murieta, Jack Powers, Augustine Chacon, and other bandits terrorized the countryside. When outlaw gangs were near, towns would occasionally raise a posse to drive them out or capture them.
Seeing that the need to combat the bandits was a growing business opportunity, Allan Pinkerton ordered his National Detective Agency, founded in 1850, to open branches in the West, and it got into the business of pursuing and capturing outlaws. There was plenty of business thanks to criminals such as the James Gang, Butch Cassidy, Sam Bass, and dozens of others. To take refuge from the law, outlaws used the advantages of the open range, remote passes, and badlands to hide. Some settlements and towns on the frontier also harbored outlaws and criminals and were called "outlaw towns".

Banditry was a major issue in California after 1849, as thousands of young men detached from family or community moved into a land with few law enforcement mechanisms. To combat this, the San Francisco Committee of Vigilance was established to give drumhead trials and death sentences to well-known offenders. Other early settlements likewise created private agencies to protect their communities in the absence of peace-keeping institutions. These vigilance committees reflected different occupations on the frontier, such as land clubs, cattlemen's associations, and mining camps. Similar vigilance committees also existed in Texas, and their main objective was to stamp out lawlessness and rid communities of desperadoes and rustlers. These committees sometimes descended into mob rule, but usually were made up of responsible citizens who wanted only to maintain order. Criminals caught by these vigilance committees were treated cruelly, often hanged or shot without any form of trial. Civilians also took up arms to defend themselves in the Old West, sometimes siding with lawmen (the Coffeyville bank robbery) and sometimes with outlaws (the Battle of Ingalls). On the post-Civil War frontier, some 523 whites, 34 blacks, and 75 others were victims of lynching. However, lynching in the Old West was not primarily caused by the absence of a legal system; it also reflected social class and the instability of new communities. Historian Michael J. Pfeifer writes, "Contrary to the popular understanding, early territorial lynching did not flow from an absence or distance of law enforcement but rather from the social instability of early communities and their contest for property, status, and the definition of social order."

Gunfights and feuds

The names and exploits of Western gunslingers took a major role in American folklore, fiction, and film. Their guns and costumes became children's toys for make-believe shootouts. The stories became immensely popular in Germany and other European countries, which produced their own novels and films about the American frontier. The image of a Wild West filled with countless gunfights was a myth based on repeated exaggerations. The most notable gunfights took place in Arizona, New Mexico, Kansas, Oklahoma, and Texas. Actual gunfights in the Old West were episodic rather than common, and their causes varied. Some were simply the result of the heat of the moment, while others grew out of longstanding feuds or clashes between bandits and lawmen. Although mostly romanticized, quick-draw duels did occasionally occur, as in the Wild Bill Hickok–Davis Tutt shootout and the Luke Short–Jim Courtright duel. Fatal duels were fought to uphold personal honor in the West. To prevent gunfights, towns such as Dodge City and Tombstone prohibited firearms in town.
Range wars were infamous armed conflicts that took place on the "open range" of the American frontier. The subject of these conflicts was control of the lands freely used for farming and cattle grazing, which gave the conflicts their name. Range wars became more common after the end of the American Civil War, and numerous conflicts were fought, such as the Pleasant Valley War, Mason County War, Johnson County War, Colorado Range War, Fence Cutting War, Colfax County War, Castaic Range War, Spring Creek raid, Barber–Mizell feud, San Elizario Salt War, and others. During a range war in Montana, a vigilante group called Stuart's Stranglers, which was made up of cattlemen and cowboys, killed up to 20 criminals and range squatters in 1884 alone. In Nebraska, stock grower Isom Olive led a range war in 1878 that killed a number of homesteaders in lynchings and shootouts before eventually leading to his own murder. Another infamous type of open-range conflict was the Sheep Wars, fought between sheep ranchers and cattle ranchers over grazing rights, mainly in Texas, Arizona, and the border region of Wyoming and Colorado. In most cases, formal military involvement was used to put a quick end to these conflicts. Other conflicts over land and territory were also fought, such as the Regulator–Moderator War, Cortina Troubles, Las Cuevas War, and the Bandit War.

Feuds involving families and bloodlines also occurred frequently on the frontier. Since private agencies and vigilance committees were the substitute for proper courts, many families initially depended on themselves and their communities for their security and justice. These feuds include the Lincoln County War, Tutt–Everett War, Flynn–Doran feud, Early–Hasley feud, Brooks–Baxter War, Sutton–Taylor feud, Horrell Brothers feud, Brooks–McFarland feud, Reese–Townsend feud, and the Earp Vendetta Ride.

Cattle

The end of the bison herds opened up millions of acres for cattle ranching. Spanish cattlemen had introduced cattle ranching and longhorn cattle to the Southwest in the 17th century, and the men who worked the ranches, called "vaqueros", were the first "cowboys" in the West. After the Civil War, Texas ranchers raised large herds of longhorn cattle. The nearest railheads were 800 or more miles (1,300+ km) north in Kansas (Abilene, Kansas City, Dodge City, and Wichita), so once the cattle were fattened, the ranchers and their cowboys drove the herds north along the Western, Chisholm, and Shawnee trails. The cattle were shipped to Chicago, St. Louis, and points east for slaughter and consumption in the fast-growing cities. The Chisholm Trail, laid out by cattleman Joseph McCoy along an old trail marked by Jesse Chisholm, was the major artery of cattle commerce, carrying over 1.5 million head of cattle between 1867 and 1871 from south Texas to Abilene, Kansas. The long drives were treacherous, especially crossing water such as the Brazos and the Red River and when the drovers had to fend off Indians and rustlers looking to make off with their cattle. A typical drive took three to four months and stretched two miles (3 km) of cattle six abreast. Despite the risks, a successful drive proved very profitable to everyone involved, as the price of one steer was $4 in Texas and $40 in the East. By the 1870s and 1880s, cattle ranches expanded further north into new grazing grounds and replaced the bison herds in Wyoming, Montana, Colorado, Nebraska, and the Dakota Territory, using the rails to ship to both coasts.
Many of the largest ranches were owned by Scottish and English financiers. The single largest cattle ranch in the entire West was owned by American John W. Iliff, "cattle king of the Plains", operating in Colorado and Wyoming. Gradually, longhorns were replaced by the British breeds of Hereford and Angus, introduced by settlers from the Northwest. Though less hardy and more disease-prone, these breeds produced better-tasting beef and matured faster. The funding for the cattle industry came largely from British sources, as European investors engaged in a speculative extravaganza—a "bubble". Graham concludes the mania was founded on genuine opportunity, as well as "exaggeration, gullibility, inadequate communications, dishonesty, and incompetence". A severe winter engulfed the plains toward the end of 1886 and well into 1887, locking the prairie grass under ice and crusted snow which the starving herds could not penetrate. The British lost most of their money, as did eastern investors like Theodore Roosevelt, but their investments did create a large industry that continues to cycle through boom and bust periods. On a much smaller scale, sheep grazing was locally popular; sheep were easier to feed and needed less water. However, Americans ate little mutton. As farmers moved in, open-range cattle ranching came to an end and was replaced by barbed-wire spreads where water, breeding, feeding, and grazing could be controlled. This led to "fence wars" which erupted over disputes about water rights.

Cowboys

Central to the myth and the reality of the West is the American cowboy. His real life was a hard one and revolved around two annual roundups, spring and fall, the subsequent drives to market, and the time off in the cattle towns spending his hard-earned money on food, clothing, gambling, and prostitution. During winter, many cowboys hired themselves out to ranches near the cattle towns, where they repaired and maintained equipment and buildings. Working the cattle was not just a routine job but also a lifestyle that exulted in the freedom of the wide unsettled outdoors on horseback. Long drives required roughly one cowboy for every 250 head of cattle. Saloons were ubiquitous (outside Mormondom), but on the trail the cowboys were forbidden to drink alcohol. Often, hired cowboys were trained and knowledgeable in their trade, such as herding, ranching, and protecting cattle. To protect their herds from wild animals, hostile Indians, and rustlers, cowboys carried their iconic weaponry: the Bowie knife, lasso, bullwhip, pistols, rifles, and shotguns. Many of the cowboys were veterans of the Civil War; a diverse group, they included Blacks, Hispanics, Native Americans, and immigrants from many lands. The earliest cowboys in Texas learned their trade, adapted their clothing, and took their jargon from the Mexican vaqueros or "buckaroos", the heirs of Spanish cattlemen from the middle-south of Spain. Chaps, the heavy protective leather trousers worn by cowboys, got their name from the Spanish "chaparreras", and the lariat, or rope, was derived from "la reata". All the distinctive gear of the cowboy—boots, saddles, hats, pants, chaps, slickers, bandannas, gloves, and collarless shirts—was practical and adaptable, designed for protection and comfort. The cowboy hat quickly developed the capability, even in the early years, to identify its wearer as someone associated with the West; it came to symbolize the frontier.
The most enduring fashion adapted from the cowboy, popular nearly worldwide today, is "blue jeans", originally made by Levi Strauss for miners in 1850.

Before a drive, a cowboy's duties included riding out on the range and bringing together the scattered cattle. The best cattle would be selected, roped, and branded, and most male cattle were castrated. The cattle also needed to be dehorned and examined and treated for infections. On the long drives, the cowboys had to keep the cattle moving and in line. The cattle had to be watched day and night as they were prone to stampedes and straying. While camping every night, cowboys would often sing to their herd to keep them calm. The workdays often lasted fourteen hours, with just six hours of sleep. It was grueling, dusty work, with just a few minutes of relaxation before and at the end of a long day. On the trail, drinking, gambling, and brawling were often prohibited and fined, and sometimes cursing as well. It was monotonous and boring work, with food to match: bacon, beans, bread, coffee, dried fruit, and potatoes. On average, cowboys earned $30 to $40 per month; because of the heavy physical and emotional toll, it was unusual for a cowboy to spend more than seven years on the range. As open-range ranching and the long drives gave way to fenced-in ranches in the 1880s, the glory days of the cowboy came to an end, and by the 1890s the myths about the "free-living" cowboy began to emerge.

Cowtowns

Anchoring the booming cattle industry of the 1860s and 1870s were the cattle towns in Kansas and Missouri. Like the mining towns in California and Nevada, cattle towns such as Abilene, Dodge City, and Ellsworth experienced a short period of boom and bust, lasting about five years. Cattle towns sprang up as land speculators rushed in ahead of a proposed rail line and built a town and the supporting services attractive to the cattlemen and the cowboys. If the railroads complied, the new grazing ground and supporting town would secure the cattle trade. However, unlike the mining towns, which in many cases became ghost towns and ceased to exist after the ore played out, cattle towns often evolved from cattle to farming and continued after the grazing lands were exhausted.

Conservation and environmentalism

The concern with the protection of the environment became a new issue in the late 19th century, pitting different interests against one another. On one side were the lumber and coal companies, which called for maximum exploitation of natural resources to maximize jobs, economic growth, and their own profit. In the center were the conservationists, led by Theodore Roosevelt and his coalition of outdoorsmen, sportsmen, bird watchers, and scientists. They wanted to reduce waste; emphasized the value of natural beauty for tourism and ample wildlife for hunters; and argued that careful management would not only enhance these goals but also increase the long-term economic benefits to society through planned harvesting and environmental protections. Roosevelt worked his entire career to put the issue high on the national agenda. He was deeply committed to conserving natural resources. He worked closely with Gifford Pinchot and used the Newlands Reclamation Act of 1902 to promote federal construction of dams to irrigate small farms, and he placed 230 million acres (360,000 square miles or 930,000 square kilometers) under federal protection. Roosevelt set aside more federal land, national parks, and nature preserves than all of his predecessors combined.
Roosevelt explained his position in 1910:

The third element, smallest at first but growing rapidly after 1870, was the environmentalists, who honored nature for its own sake and rejected the goal of maximizing human benefits. Their leader was John Muir (1838–1914), a widely read author and naturalist, a pioneer advocate of preserving wilderness for its own sake, and the founder of the Sierra Club. Muir, based in California, began organizing support in 1889 to preserve the sequoias in the Yosemite Valley; Congress did pass the Yosemite National Park bill (1890). In 1897 President Grover Cleveland created thirteen protected forests, but lumber interests had Congress cancel the move. Muir, taking the persona of an Old Testament prophet, crusaded against the lumbermen, portraying the fight as a contest "between landscape righteousness and the devil". A master publicist, Muir turned the tide of public sentiment with magazine articles in Harper's Weekly (June 5, 1897) and the Atlantic Monthly. He mobilized public opinion to support Roosevelt's program of setting aside national monuments, national forest reserves, and national parks. However, Muir broke with Roosevelt, and especially with President William Howard Taft, over the Hetch Hetchy dam, which was built in Yosemite National Park to supply water to San Francisco. Biographer Donald Worster says, "Saving the American soul from a total surrender to materialism was the cause for which he fought."

Buffalo

The rise of the cattle industry and the cowboy is directly tied to the demise of the huge herds of bison—usually called the "buffalo". Once numbering over 25 million on the Great Plains, the grass-eating herds were a vital resource animal for the Plains Indians, providing food, hides for clothing and shelter, and bones for implements. Loss of habitat, disease, and over-hunting steadily reduced the herds through the 19th century to the point of near extinction. The last 10–15 million died out in the decade 1872–1883; only about 100 survived. The tribes that depended on the buffalo had little choice but to accept the government offer of reservations, where the government would feed and supply them. Conservationists founded the American Bison Society in 1905; it lobbied Congress to establish public bison herds. Several national parks in the U.S. and Canada were created, in part to provide a sanctuary for bison and other large wildlife. The bison population reached 500,000 by 2003.

End of the frontier

Following the 1890 U.S. Census, the superintendent announced that there was no longer a clear line of advancing settlement, and hence no longer a contiguous frontier in the continental United States. The 1900 census population distribution, however, still showed a contiguous frontier line. By the 1910 census, only pockets of frontier remained, without a clear westward line, allowing travel across the continent without ever crossing a frontier line. Virgin farmland was increasingly hard to find after 1890, although the railroads advertised some in eastern Montana. Bicha shows that nearly 600,000 American farmers sought cheap land by moving to the prairie frontier of the Canadian West from 1897 to 1914. However, about two-thirds of them grew disillusioned and returned to the U.S. The Homestead Acts and the proliferation of railroads are often credited as important factors in the closing of the frontier, as they efficiently brought in settlers and the infrastructure they required.
Barbed wire is also credited with ending the traditional open range. In addition, the eventual adoption of automobiles and the network of adequate roads they required solidified the frontier's end. The admission of Oklahoma as a state in 1907, upon the combination of the Oklahoma Territory and the last remaining Indian Territory, and of the Arizona and New Mexico territories as states in 1912, marks the end of the frontier story for many scholars. Because of their low and uneven populations during this period, however, pockets of frontier territory remained for the time being. A few typical frontier episodes still occurred; the last stagecoach robbery, for example, took place in Nevada's remaining frontier in December 1916. The Mexican Revolution also led to significant conflict, known as the Mexican Border War (1910–1919), reaching across the US–Mexico border, which was still mostly frontier territory. Flashpoints included the Battle of Columbus (1916) and the Punitive Expedition (1916–17). The Bandit War (1915–1919) involved attacks targeted against Texan settlers. A few minor fights involving Indians also occurred as late as the Bluff War (1914–1915) and the Posey War (1923). Alaska was not admitted as a state until 1959. The ethos and storyline of the "American frontier" had passed.

The people of the American frontier

The miners

In 1848, James W. Marshall was building a sawmill on the bank of the American River when he noticed metal flakes in the tailrace. He quickly recognized the flakes to be gold. The sawmill, however, belonged to John Sutter, and the discovery could not be kept quiet; word quickly spread that there was gold in the American River. People packed their essentials, loaded their wagons, and headed west along the overland trails in hopes of striking it rich. This was the start of the California Gold Rush.

The California Gold Rush was both good and bad for America at the time. It swelled the population of California to almost 100,000 people, which helped modernize the state, but it also drew population away from other states, whose employment suffered as people quit their jobs to embark on the journey. The California Gold Rush finally came to an end in 1855. Gold was extracted from the river by panning, normally done by prospectors. A panner scoops water and gravel into a pan and swirls it; the water and lighter sediment wash out over the rim while the far denser gold settles, so the gold and other heavy minerals are left behind in the pan, where they can easily be picked out. Even though the Gold Rush happened before the Wild West era, it was one of the biggest reasons the Wild West came about in the first place.

Even after the California Gold Rush, mining remained a very common occupation. Most mountainside towns had a mineshaft. Most miners were poor, as mining was a very labor-intensive job. Miners used pickaxes to dig into the mountains. The mine shafts they dug were poorly kept up; damp, poorly lit, and maze-like, they were easy to get lost in. Miners extracted gold, zinc, copper, and asbestos, which they sold to shopkeepers and wealthy buyers for cash. Miners were paid a wage of about $1.70 per day.

The women

The women of the Wild West were treated very poorly.
Women were seen as the caretakers of children, so most of the occupations available to them involved working with children. One of these jobs was teaching, which required tertiary education, and women were allowed to attend universities to study teaching. Some women also worked in predominantly male positions; there were cowgirls, female business owners, female gunslingers, and female bounty hunters. Women had less legal protection than men. They also were not allowed in saloons, and their husbands were able to control almost every corner of their lives.

The soldiers

The soldiers of the Wild West were very helpful, but most people do not even know of their existence. They built shelter, helped the emigrants, and guarded government and private property. They were usually peaceful, but a force to be reckoned with, and they had access to most of the rifles in the country. There were not many soldiers, because the army was cut back sharply after the Civil War, shrinking from about 54,000 men to only about 25,000.

The railroaders

The railroaders were responsible for the construction of railroads. Most of the railroad workers were Irish immigrants who needed a job. They had little choice but to work; living on about a dollar a day, their job was what kept them fed. The railroaders' greatest undertaking was the construction of the transcontinental railroad, a massive project that took six years to complete. The work environment was awful: the men were exposed to toxic fumes and dust, worked along mountainsides from which they frequently fell, and labored around blasting and moving trains. They also sometimes had to use degraded, unstable explosives to blast tunnels through the mountains; such explosives were unpredictable and frequently detonated early, leaving permanent injuries to a few unlucky individuals.

The loggers

Logging was a labor-intensive occupation. Like mining and railroad work, it was a fairly common job in this era, and it was ultimately very dangerous. Loggers were paid more than miners and railroaders combined, making about $3.20 a day. Lumberjacks had many tools to help them fell trees. Depending on the size of the tree, multiple loggers would be sent to cut it down. They would use double-bitted axes to chop at the base of the tree, and if the tree was too big for the axes, they would use a gigantic crosscut saw; these saws could be over 12 feet in length. For transportation, they would use a high-wheel loader to lift the massive logs, which were strapped together with rope; another rope was tied to oxen, which pulled the logs to wherever they needed to go.

The Indians

The Indians had settled North America long before Leif Erikson reached the Americas for the Old World around 1000 CE. Violent clashes between the Norse newcomers and Native Americans began with those early encounters and set a pattern that lasted for centuries and continued into the Wild West era. The people of the Wild West era were even harsher to the Native Americans than past generations had been. The reasons so many natives died in this period were epidemics, discrimination, and wider public access to weaponry.
By the end of the era, the Native American population had been devastated, a decline that some sources describe as a genocide of the Native Americans.

The chieftains

The chiefs were the leaders of the Native American tribes; they were responsible for emboldening the other tribe members and for providing for the welfare of everyone in the tribe. However, the formation of the first Indian reservations significantly reduced the power of the chieftains, as they now had to work alongside lifelong enemies in order to survive.

The frontiersmen

The frontiersmen were the explorers of the Old West. In 1803, Thomas Jefferson closed the deal on the Louisiana Purchase for 15 million dollars, gaining roughly 828,000 square miles of territory. He sent Meriwether Lewis and William Clark, along with about 45 other men, to explore the new territory. Their journey across the western United States became the famous Lewis and Clark Expedition. There were many dangers on the trail: river crossings, injuries, epidemics, famine, and hostile Native American tribes. The Lewis and Clark expedition took place before the Wild West era, but it was a huge event in United States history and was one of the main reasons the Wild West era began.

Besides Lewis and Clark, the Wild West era brought many other frontiersmen. Frontiersmen were very self-sufficient compared to ordinary townspeople: they cleared their own land, built their own shelter, and produced their own food. Their nomadic lifestyle was hard on America's economy; because they earned little, they put little money into circulation, and stores lost customers. Their movement into Indian lands also created territorial disputes with the Native Americans. For example, Charles Bent was killed by Pueblo warriors during the Taos Revolt of 1847, shortly after his arrival as governor of the newly acquired territory.

Acculturated places

The Spanish West

In 1848, when America won the Mexican–American War, it gained vast new territory in the Southwest, including California, Arizona, New Mexico, Colorado, Nevada, and Utah, and confirmed its hold on Texas. This was the biggest cause of the Wild West era. As large numbers of people relocated to these underdeveloped lands, a distinct culture developed within western America. The culture of the Mexican state of Sonora was also acculturated to that of the Wild West.

The Canadians

The Klondike Gold Rush drew prospectors north, and on June 13, 1898, the Yukon Territory Act made the Yukon a separate territory. Trails to its capital, Dawson City, gave prospectors access to the goldfields. The Klondike trail was a dangerous place: wild animals attacked travelers, and contagious diseases spread along the route. In total, over 1,000 people died on the trail from various causes.

American frontier in popular culture

The exploration, settlement, exploitation, and conflicts of the "American Old West" form a unique tapestry of events, which has been celebrated by Americans and foreigners alike—in art, music, dance, novels, magazines, short stories, poetry, theater, video games, movies, radio, television, song, and oral tradition—and which continues in the modern era. Levy argues that the physical and mythological West inspired composers Aaron Copland, Roy Harris, Virgil Thomson, Charles Wakefield Cadman, and Arthur Farwell. Religious themes have inspired many environmentalists as they contemplate the pristine West before the frontiersmen violated its spirituality.
Actually, as historian William Cronon has demonstrated, the concept of "wilderness" was highly negative and the antithesis of religiosity before the romantic movement of the 19th century. The Frontier Thesis of historian Frederick Jackson Turner, proclaimed in 1893, established the main lines of historiography that fashioned scholarship for three or four generations and appeared in the textbooks used by practically all American students. Popularizing Western lore The mythologizing of the West began with minstrel shows and popular music in the 1840s. During the same period, P. T. Barnum presented Indian chiefs, dances, and other Wild West exhibits in his museums. However, large-scale awareness took off when the dime novel appeared in 1859, the first being Malaeska, the Indian Wife of the White Hunter. By simplifying reality and grossly exaggerating the truth, the novels captured the public's attention with sensational tales of violence and heroism and fixed in the public's mind stereotypical images of heroes and villains—courageous cowboys and savage Indians, virtuous lawmen and ruthless outlaws, brave settlers and predatory cattlemen. Millions of copies and thousands of titles were sold. The novels relied on a series of predictable literary formulas appealing to mass tastes and were often written in as little as a few days. The most successful of all dime novels was Edward S. Ellis' Seth Jones (1860). Ned Buntline's stories glamorized Buffalo Bill Cody, and Edward L. Wheeler created "Deadwood Dick" and "Hurricane Nell" while featuring Calamity Jane. Buffalo Bill Cody was the most effective popularizer of the Old West in the U.S. and Europe. He presented the first "Wild West" show in 1883, featuring a recreation of famous battles (especially Custer's Last Stand), expert marksmanship, and dramatic demonstrations of horsemanship by cowboys and Indians, as well as sure-shooting Annie Oakley. Elite Eastern writers and artists of the late 19th century promoted and celebrated western lore. Theodore Roosevelt, wearing his hats as a historian, explorer, hunter, rancher, and naturalist, was especially productive. Their work appeared in upscale national magazines such as Harper's Weekly, which featured illustrations by artists Frederic Remington, Charles M. Russell, and others. Readers bought action-filled stories by writers like Owen Wister, conveying vivid images of the Old West. Remington lamented the passing of an era he had helped to chronicle. 20th century imagery In the 20th century, both tourists to the West and avid readers enjoyed the visual imagery of the frontier. The Western movies provided the most famous examples, as in the numerous films of John Ford, who was especially enamored of Monument Valley. Critic Keith Phipps says that its scenery has "defined what decades of moviegoers think of when they imagine the American West." The heroic stories coming out of the building of the transcontinental railroad in the mid-1860s enlivened many dime novels and illustrated many newspapers and magazines with the juxtaposition of the traditional environment with the iron horse of modernity. Cowboy images The cowboy has for over a century been an iconic American image both in the country and abroad, recognized worldwide and revered by Americans. 
The most famous popularizers of the image include part-time cowboy and "Rough Rider" President Theodore Roosevelt (1858–1919), who made "cowboy" internationally synonymous with the brash aggressive American, and Indian Territory-born trick roper Will Rogers (1879–1935), the leading humorist of the 1920s. Roosevelt conceptualized the herder (cowboy) as a stage of civilization distinct from the sedentary farmer—a theme well expressed in the 1943 Broadway hit Oklahoma! that highlights the enduring conflict between cowboys and farmers. Roosevelt argued that the manhood typified by the cowboy—and outdoor activity and sports generally—was essential if American men were to avoid the softness and rot produced by an easy life in the city. Will Rogers, the son of a Cherokee judge in Oklahoma, started with rope tricks and fancy riding, but by 1919 discovered his audiences were even more enchanted with his wit in his representation of the wisdom of the common man. Others who contributed to enhancing the romantic image of the American cowboy include Charles Siringo (1855–1928) and Andy Adams (1859–1935). A cowboy, Pinkerton detective, and western author, Siringo was the first authentic cowboy autobiographer. Adams spent the 1880s in the cattle industry in Texas and the 1890s mining in the Rockies. When an 1898 play's portrayal of Texans outraged Adams, he started writing plays, short stories, and novels drawn from his own experiences. His The Log of a Cowboy (1903) became a classic novel about the cattle business, especially the cattle drive. It described a fictional drive of the Circle Dot herd from Texas to Montana in 1882 and became a leading source on cowboy life; historians retraced its path in the 1960s, confirming its basic accuracy. His writings are acclaimed and criticized for realistic fidelity to detail on the one hand and thin literary qualities on the other. Many regard Red River (1948), directed by Howard Hawks and starring John Wayne and Montgomery Clift, as an authentic depiction of a cattle drive. The unique skills of the cowboys are highlighted in the rodeo. It began in an organized fashion in the West in the 1880s, when several Western cities followed up on touring Wild West shows and organized celebrations that included rodeo activities. The establishment of major cowboy competitions in the East in the 1920s led to the growth of rodeo sports. Trail cowboys who were also gunfighters, such as John Wesley Hardin, Luke Short and others, were known for their prowess, speed and skill with pistols and other firearms. Their violent escapades and reputations morphed over time into the stereotypical image of violence associated with the "cowboy hero". Code of the West Historians of the American West have written about the mythic West: the West of western literature, art and people's shared memories, a phenomenon known as "the Imagined West". The "Code of the West" was an unwritten, socially agreed-upon set of informal laws shaping the cowboy culture of the Old West. Over time, the cowboys developed a personal culture of their own, a blend of values that even retained vestiges of chivalry. Such hazardous work in isolated conditions also bred a tradition of self-dependence and individualism, with great value put on personal honesty, exemplified in songs and cowboy poetry. The code also included the gunfighter, who sometimes followed a form of code duello, adopted from the Old South, in order to settle disputes and duels. 
Extrajudicial justice seen during the frontier days, such as lynching, vigilantism and gunfighting, was in turn popularized by the Western genre and would later be cited in modern times as examples of frontier justice. Historiography Scores of Turner's students became professors in history departments in the western states and taught courses on the frontier. Scholars have debunked many of the myths of the frontier, but they nevertheless live on in community traditions, folklore, and fiction. In the 1970s a historiographical range war broke out between the traditional frontier studies, which stress the influence of the frontier on all of American history and culture, and the "New Western History", which narrows the geographical and time framework to concentrate on the trans-Mississippi West after 1850. It avoids the word "frontier" and stresses cultural interaction between white culture and groups such as Indians and Hispanics. History professor William Weeks of the University of San Diego argues that in this "New Western History" approach: However, by 2005, Aron argues, the two sides had "reached an equilibrium in their rhetorical arguments and critiques". Meanwhile, environmental history has emerged, in large part from frontier historiography, hence its emphasis on wilderness, and it plays an increasingly large role in frontier studies. Historians have approached the environment through either the frontier or regionalism. The first group emphasizes human agency on the environment; the second looks at the influence of the environment. William Cronon has argued that Turner's famous 1893 essay was environmental history in an embryonic form. It emphasized the vast power of free land to attract and reshape settlers, making a transition from wilderness to civilization. Journalist Samuel Lubell saw similarities between the frontier's Americanization of immigrants that Turner described and the social climbing by later immigrants in large cities as they moved to wealthier neighborhoods. He compared the effects of the railroad opening up Western lands to urban transportation systems and the automobile, and Western settlers' "land hunger" to poor city residents seeking social status. Just as the Republican party benefited from support from "old" immigrant groups that settled on frontier farms, "new" urban immigrants formed an important part of the Democratic New Deal coalition that began with Franklin Delano Roosevelt's victory in the 1932 presidential election. Since the 1960s, an active center has been the history department at the University of New Mexico, along with the University of New Mexico Press. Leading historians there include Gerald D. Nash, Donald C. Cutter, Richard N. Ellis, Richard Etulain, Margaret Connell-Szasz, Paul Hutton, Virginia Scharff, and Samuel Truett. The department has collaborated with other departments and emphasizes Southwestern regionalism, minorities in the Southwest, and historiography. See also General The Oregon-California Trails Association preserves, protects and shares the histories of the emigrants who followed these trails westward. Indian massacre: list of massacres of Indians by whites and vice versa. March (territorial entity): a medieval European term with some similarities. National Cowboy & Western Heritage Museum: museum and art gallery in Oklahoma City, Oklahoma, housing one of the largest collections in the world of Western, American cowboy, American rodeo, and American Indian art, artifacts, and archival materials. Rodeo: demonstration of cattle wrangling skills. 
Territories of the United States. The West As America. Timeline of the American Old West Wanted poster: a poster, popular in mythic scenes of the west, let the public know of criminals whom authorities wish to apprehend. Western lifestyle Wild West shows: a following of the Wild West shows of the American frontier. People Gunfighter List of American Old West outlaws: list of known outlaws and gunfighters of the American frontier popularly known as the "Wild West". List of cowboys and cowgirls List of Western lawmen: list of notable law enforcement officials of the American frontier. They occupied positions as sheriff, marshal, Texas Rangers, and others. Schoolmarm: A female teacher that usually works in a one-room schoolhouse :Category:Gunslingers of the American Old West :Category:Lawmen of the American Old West :Category:Outlaws of the American Old West Study Desert Magazine Journal of the West True West Magazine Western History Association Literature Chris Enss: author of historical nonfiction that documents the forgotten women of the Old West. Zane Grey: author of many popular novels on the Old West Karl May: best selling German writer of all time, noted chiefly for wild west books set in the American West. Lorin Morgan-Richards: author of Old West titles and The Goodbye Family series. Winnetou: American-Indian hero of several novels written by Karl May. Games Aces & Eights: Shattered Frontier: an award-winning alternate history western role-playing gaming. Boot Hill: One of the early alternative RPGs from TSR and using a similar system to Dungeons & Dragons. Deadlands: an alternate history western horror role-playing game. Dust Devils: a western role-playing game modeled after Clint Eastwood films and similar darker Westerns. The Red Dead series takes place in the days of the Wild West. List of Western computer and video games: a list of computer and video games patterned after Westerns. Explanatory notes References Further reading Surveys Billington, Ray Allen, and Martin Ridge. Westward Expansion: A History of the American Frontier (5th ed. 2001); 892 pp; textbook with 160 –pp of detailed annotated bibliographies older edition online; also online 2001 abridged edition to borrow Billington, Ray Allen. The Far Western frontier, 1830–1860 (1962), Wide-ranging scholarly survey; online free Clark, Thomas D. The rampaging frontier: Manners and humors of pioneer days in the South and the middle West (1939). Deverell, William, ed. A Companion to the American West (Blackwell Companions to American History) (2004); 572 pp excerpt and text search Hawgood, John A. America's Western Frontiers (1st ed. 1967); 234 pp; textbook covering pre-Columbian era through the mid-twentieth century Heard, J. Norman. Handbook of the American Frontier (5 vol Scarecrow Press, 1987–98); Covers 1: The Southeastern Woodlands, 2: The Northeastern Woodlands, 3: The Great Plains, 4: The Far West and vol. 5: Chronology, Bibliography, Index. Compilation of Indian-white contacts & conflicts Hine, Robert V., and John Mack Faragher. The American West: A New Interpretive History (Yale University Press, 2000). 576 pp.; textbook Josephy, Alvin. The American heritage book of the pioneer spirit (1965) Lamar, Howard, ed. The New Encyclopedia of the American West (1998); this is a revised version of Reader's Encyclopedia of the American West ed. by Howard Lamar (1977) Milner, Clyde, Carol O'Connor, and Martha Sandweiss, eds. 
The Oxford History of the American West (1994) long essays by scholars; online free Paxson, Frederic Logan. History of the American frontier, 1763–1893 (1924), an old survey by leading authority; Pulitzer Prize Paxson, Frederic Logan. The Last American Frontier (1910) online free Snodgrass, Mary Ellen, ed. Settlers of the American West: The Lives of 231 Notable Pioneers, (2015) McFarland & Company, Utley, Robert M. The Story of The West (2003) White, Richard. "It's Your Misfortune and None of My Own": A New History of the American West (1991), textbook focused on the post-1890 far west Great Plains And land policy Gates, Paul W. "An overview of American land policy". Agricultural History (1976): 213–29. in JSTOR Gates, Paul W. "Homesteading in the High Plains". Agricultural History (1977): 109–33. in JSTOR Otto, John Solomon. The Southern Frontiers, 1607–1860: The Agricultural Evolution of the Colonial and Antebellum South (ABC-CLIO, 1989). Swierenga, Robert P. "Land Speculation and Its Impact on American Economic Growth and Welfare: A Historiographical Review". Western Historical Quarterly (1977) 8#3 pp: 283–302. in JSTOR Unruh, John David. The Plains Across: The Overland Emigrants and the Trans-Mississippi West, 1840–1860 (1993) Van Atta, John R. Securing the West: Politics, Public Lands, and the Fate of the Old Republic, 1785–1850 (2014) xiii + 294 pp. online review Historiography Billington, Ray Allen. America's Frontier Heritage (1984), a favorable analysis of Turner's theories about social sciences and historiography online Etulain, Richard W., "Clio's Disciples on the Rio Grande: Western History at the University of New Mexico", New Mexico Historical Review 87 (Summer 2012), 277–98. Hurtado, Albert L., "Bolton and Turner: The Borderlands and American Exceptionalism", Western Historical Quarterly, (Spring 2013) 44#1 pp. 5–20. Limerick, Patricia. The Legacy of Conquest: The Unbroken Past of the American West (1987), attacks Turner and promotes the New Western History Smith, Stacey L. "Beyond North and South: Putting the West in the Civil War and Reconstruction", Journal of the Civil War Era (Dec 2016) 6#4 pp. 566–91. excerpt Spackman, S. G. F. "The Frontier and Reform in the United States." Historical Journal 13#2 (1970): 333–39. online. Weber, David J. "The Spanish Borderlands, Historiography Redux." The History Teacher, 39#1 (2005), pp. 43–56., online. Images and memory Brégent-Heald Dominique. "Primitive Encounters: Film and Tourism in the North American West", Western Historical Quarterly (2007) 38#1 (Spring, 2007), pp. 47–67 in JSTOR Etulain, Richard W. Re-imagining the Modern American West: A Century of Fiction, History, and Art (1996) Hyde, Anne Farrar. An American Vision: Far Western Landscape and National Culture, 1820–1920 (New York University Press, 1993) Prown, Jules David, Nancy K. Anderson, and William Cronon, eds. Discovered Lands, Invented Pasts: Transforming Visions of the American West (1994) Rothman, Hal K. Devil's Bargains: Tourism and the Twentieth-Century American West (University of Kansas Press, 1998) Wrobel, David M. Global West, American Frontier: Travel, Empire, and Exceptionalism from Manifest Destiny to the Great Depression (University of New Mexico Press, 2013) 312 pp.; evaluates European and American travelers' accounts Primary sources Phillips, Ulrich B. Plantation and Frontier Documents, 1649–1863; Illustrative of Industrial History in the Colonial and Antebellum South: Collected from MSS. and Other Rare Sources. Two volumes. (1909). 
vol 1 & 2 online Watts, Edward, and David Rachels, eds. The First West: Writing from the American Frontier, 1776–1860 (Oxford UP, 2002), 960 pp; primary sources excerpt and text search, long excerpts from 59 authors California Gold Rush, from PBS 1860s and 1870s from PBS 1880s from PBS "Closing the Frontier", from U California Scholarly articles Full text of all articles in Western Historical Quarterly, 1972 to present Great Plains Quarterly Table of contents, 1981 to present; 2014 to present online articles External links Culture Western Folklife Center History Autry National Center of the American West – Los Angeles, California American West History New Perspectives on 'The West'. The West Film Project, WETA-TV, 2001 Dodge City, Kansas 'Cowboy Capital' Fort Dodge, Kansas History by Ida Ellen Rath, 1964 w/ photos Old West Kansas Tombstone Arizona History "The American West", BBC Radio 4 discussion with Frank McLynn, Jenni Calder and Christopher Frayling (In Our Time, June 13, 2002) Media The Frontier: A Frontier Town Three Months Old by Ward Platt – 1908 book on the real West. Free to read and full-text search. 161 photographs of frontier geography and personalities; these are pre-1923 and out of copyright 18th century in the United States 1959 disestablishments in the United States 19th century in the United States 20th century in the United States American folklore Culture of the Western United States History of United States expansionism Western United States Frontier
46063
https://en.wikipedia.org/wiki/Case%20sensitivity
Case sensitivity
In computers, case sensitivity defines whether uppercase and lowercase letters are treated as distinct (case-sensitive) or equivalent (case-insensitive). For instance, when users interested in learning about dogs search an e-book, "dog" and "Dog" are of the same significance to them, so they request a case-insensitive search. But when they search an online encyclopedia for information about the United Nations, for example, or for a term whose capitalization distinguishes it from otherwise identical terms, they may prefer a case-sensitive search. Areas of significance Case sensitivity may differ depending on the situation: Searching: Users expect information retrieval systems to apply the appropriate case sensitivity for the nature of an operation. Users looking for the word "dog" in an online journal probably do not wish to differentiate between "dog" and "Dog", as this is a writing distinction; the word should be matched whether it appears at the beginning of a sentence or not. On the other hand, users looking for information about a brand name, trademark, human name, or city name may be interested in performing a case-sensitive operation to filter out irrelevant results. For example, somebody searching for the name "Jade" would not want to find references to the mineral called "jade". On the English Wikipedia, for example, a search for Friendly fire returns the military article, but Friendly Fire (capitalized "Fire") returns the disambiguation page. Usernames: Authentication systems usually treat usernames as case-insensitive to make them easier to remember, to reduce typing complexity, and to eliminate the possibility of both mistakes and fraud when two usernames would be identical in every respect except the case of one of their letters. However, these systems are not case-blind: they preserve the case of the characters in the name so that users may choose an aesthetically pleasing combination. Passwords: Authentication systems usually treat passwords as case-sensitive. This enables users to increase the complexity of their passwords. File names: Traditionally, Unix-like operating systems treat file names case-sensitively, while Microsoft Windows is case-insensitive but, for most file systems, case-preserving. For more details, see below. Variable names: Some programming languages are case-sensitive for their variable names while others are not. For more details, see below. URLs: The path, query, fragment, and authority sections of a URL may or may not be case-sensitive, depending on the receiving web server. The scheme and host parts, however, are case-insensitive and are normally written in lowercase. In programming languages Some programming languages are case-sensitive for their identifiers (C, C++, Java, C#, Verilog, Ruby, Python and Swift). Others are case-insensitive (i.e., not case-sensitive), such as ABAP, Ada, most BASICs (an exception being BBC BASIC), Fortran, SQL (for the syntax, and for some vendor implementations, e.g. Microsoft SQL Server, the data itself) and Pascal. There are also languages, such as Haskell, Prolog, and Go, in which the capitalisation of an identifier encodes information about its semantics. Some other programming languages have varying case sensitivity; in PHP, for example, variable names are case-sensitive but function names are not case-sensitive. This means that if you define a function in lowercase, you can call it in uppercase, but if you define a variable in lowercase, you cannot refer to it in uppercase. 
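As a minimal illustration of the two behaviours (a sketch written for this article rather than taken from any particular implementation), the following snippet uses Python, one of the case-sensitive languages listed above: identifiers that differ only in case are distinct, while explicit case folding gives the kind of case-insensitive matching described for search. The variable and function names are invented for the example.

```python
# Python identifiers are case-sensitive, so these are two distinct variables.
dog = "a domesticated canine"
Dog = "a search term capitalized at the start of a sentence"

print(dog == Dog)  # False: the two names refer to different values


def contains_case_insensitive(haystack: str, needle: str) -> bool:
    """Match text the way a case-insensitive search would, by folding case first."""
    return needle.casefold() in haystack.casefold()


print(contains_case_insensitive("The Dog barked.", "dog"))  # True: case is ignored
print("dog" in "The Dog barked.")                           # False: exact, case-sensitive match
```

str.casefold() is used rather than str.lower() because it also normalizes characters such as the German ß, which matters for case-insensitive matching beyond ASCII.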
Nim is case-insensitive and ignores underscores, as long as the first characters match. In text search A text search operation could be case-sensitive or case-insensitive, depending on the system, application, or context. The user can in many cases specify whether a search is sensitive to case, e.g. in most text editors, word processors, and Web browsers. A case-insensitive search is more comprehensive, finding "Language" (at the beginning of a sentence), "language", and "LANGUAGE" (in a title in capitals); a case-sensitive search will find the computer language "BASIC" but exclude most of the many unwanted instances of the word. For example, the Google Search engine is basically case-insensitive, with no option for case-sensitive search. In Oracle SQL, most operations and searches are case-sensitive by default, while in most other DBMSs, SQL searches are case-insensitive by default. Case-insensitive operations are sometimes said to fold case, from the idea of folding the character code table so that upper- and lowercase letters coincide. In filesystems In filesystems on Unix-like systems, filenames are usually case-sensitive (there can be separate readme.txt and Readme.txt files in the same directory). MacOS is somewhat unusual in that, by default, it uses HFS+ and APFS in a case-insensitive mode (so that there cannot be a readme.txt and a Readme.txt in the same directory) that is nevertheless case-preserving (so that a file created as readme.txt is shown as readme.txt and a file created as Readme.txt is shown as Readme.txt). This causes some issues for developers and power users, because most file systems in other Unix-like environments are case-sensitive, and, for example, a source code tree for software for Unix-like systems might have both a file named Makefile and a file named makefile in the same directory. In addition, some Mac installers assume case insensitivity and fail on case-sensitive file systems. The older MS-DOS filesystems FAT12 and FAT16 were case-insensitive and not case-preserving, so that a file whose name is entered as readme.txt or ReadMe.txt is saved as README.TXT. Later, with VFAT in Windows 95, the FAT file systems became case-preserving as an extension of supporting long filenames. Later Windows file systems such as NTFS are internally case-sensitive, and a readme.txt and a Readme.txt can coexist in the same directory. However, for practical purposes filenames behave as case-insensitive as far as users and most software are concerned. This can cause problems for developers or software coming from Unix-like environments, similar to the problems with macOS case-insensitive file systems. Notes References Articles containing video clips Capitalization
537345
https://en.wikipedia.org/wiki/Universities%20at%20Shady%20Grove
Universities at Shady Grove
The Universities at Shady Grove began in 1992 as part of the University of Maryland University College and was reorganized under its present name in 2000. Daytime, evening and weekend classes are offered at Shady Grove to students studying in 80 undergraduate, graduate, degree and certificate programs. The Universities at Shady Grove (USG) is a regional higher education center of the University System of Maryland that offers students access to undergraduate and graduate degree programs from nine universities on one campus in Rockville, Maryland. There is no separate application for USG; prospective students seeking to enroll in a program at USG apply directly to the university offering their desired program. Students are taught by the same professors, take the same courses, and have the same curriculum as students enrolled in that program at their university's main campus. Students take their classes at USG, but earn their degree from the university offering their program. 
Academics 
As a regional higher education center of the University System of Maryland, undergraduate and graduate degree programs are offered on the USG campus from nine Maryland universities: 
Bowie State University 
Ed.D. in Education 
M.Ed. in Education 
Salisbury University 
B.S. in Exercise Science 
M.S. in Applied Health Physiology 
B.F.A. in Art, Graphic Design 
Towson University 
B.S. in Early Childhood Education 
B.S. in Elementary Education/Special Education 
M.A.T. in Special Education 
M.Ed. in Early Childhood Education 
M.Ed. in Special Education 
University of Baltimore 
B.S. in Health Systems Management 
B.S. in Simulation and Game Design 
M.A. in Integrated Design 
M.P.A. in Public Administration 
M.P.S. in Justice Leadership and Management 
M.S. in Forensic Science - High Technology Crime 
M.S. in Health Systems Management 
Doctor of Public Administration 
Graduate Certificate in Government Financial Management 
University of Maryland, Baltimore 
B.S. in Nursing (RN-to-BSN) 
B.S. in Nursing (Traditional Option) 
M.S. in Medical Cannabis Science and Therapeutics 
M.S. in Pharmaceutical Sciences 
Master of Social Work 
Doctor of Nursing Practice – Family Nurse Practitioner 
University of Maryland, Baltimore County 
B.A. in History 
B.A. in Political Science 
B.A. in Psychology 
B.A. in Social Work 
B.S. in Translational Life Science Technology 
M.P.S. in Biotechnology 
M.P.S. in Cybersecurity 
M.P.S. in Data Science 
M.P.S. in Geographic Information Systems 
M.P.S. in I/O Psychology 
M.P.S. in Technical Management 
University of Maryland, College Park 
B.A. in Communication 
B.A. in Criminology and Criminal Justice 
B.S. in Accounting 
B.S. in Biological Sciences 
B.S. in Information Science (BSIS) 
B.S. in Management 
B.S. in Marketing 
B.S. in Public Health Science 
M.Ed. in Human Development 
M.Ed. in Math Education: Special Studies in Middle School Math 
M.Ed. in Special Education/Severe Disabilities with a focus on Autism Spectrum Disorders for Teachers, Professionals, and Professional Staff 
M.Ed. in Teacher Leadership: Special Studies in STEM Education 
Master of Business Administration 
Master of Information Management (MIM) 
Master of Library and Information Science (MLIS) 
Professional Master in Engineering 
Graduate Certificate in Engineering (GCEN) 
Post-Baccalaureate Certificate in Global Health 
University of Maryland Eastern Shore 
B.S. in Construction Management Technology 
B.S. in Hospitality and Tourism Management 
University of Maryland Global Campus 
B.A. in Communication Studies 
B.S. in Accounting 
B.S. in Business Administration 
B.S. in Computer Networks and Cybersecurity 
B.S. in Cybersecurity Management and Policy 
B.S. in Digital Media and Web Technology 
B.S. in Human Resource Management 
B.S. in Information Systems Management 
B.S. in Investigative Forensics 
B.S. in Public Safety Administration 
B.S. in Software Development and Security 
B.T.P.S. in Biotechnology 
B.T.P.S. in Laboratory Management 
M.S. in Biotechnology 
M.S. in Health Care Administration 
M.S. in Information Technology 
M.S. in Management 
References 
External links 
Official site 
Bowie State University Towson University University of Baltimore University of Maryland, Baltimore University of Maryland, Baltimore County University of Maryland, College Park University of Maryland Eastern Shore University of Maryland Global Campus University System of Maryland Two year upper class colleges Salisbury University Educational institutions established in 2000 Universities and colleges in Montgomery County, Maryland 2000 establishments in Maryland
8618334
https://en.wikipedia.org/wiki/TCP%20acceleration
TCP acceleration
TCP acceleration is the name of a series of techniques for achieving better throughput on a network connection than standard TCP achieves, without modifying the end applications. It is an alternative or a supplement to TCP tuning. Commonly used approaches include checksum offloading, TCP segmentation and reassembly offloading, DMA offloading, ACK pacing, TCP transparent proxies in two or more middleboxes, and TCP offload engines. TCP transparent proxies TCP transparent proxies work by breaking up long end-to-end control loops into several smaller control loops, intercepting and relaying TCP connections within the network. This allows TCP flows to react more quickly to packet losses occurring within the network and thus achieve higher throughput. The idea of a TCP accelerator is to terminate TCP connections inside the network processor and then relay the data over a second connection toward the end system. The data packets that originate from the sender are buffered at the accelerator node, which is responsible for performing local retransmissions in the event of packet loss. Thus, in case of losses, the feedback loop between the sender and the receiver is shortened to the one between the acceleration node and the receiver, which ensures faster delivery of data to the receiver. Since TCP is a rate-adaptive protocol, the rate at which the TCP sender injects packets into the network depends on the prevailing load conditions within the network as well as the processing capacity of the receiver; the sender judges the prevailing network conditions from the acknowledgments it receives. The acceleration node splits the feedback loop between the sender and the receiver and thus produces a shorter round-trip time (RTT) per packet. A shorter RTT is beneficial as it ensures a quicker response to any changes in the network and a faster adaptation by the sender to combat these changes. Disadvantages of the method include the fact that the TCP session has to be directed through the accelerator; this means that if routing changes so that the accelerator is no longer in the path, the connection will be broken. It also breaks the end-to-end property of the TCP ACK mechanism: when the sender receives an ACK, the packet has been stored by the accelerator, but not necessarily delivered to the receiver. Asymmetric TCP acceleration While TCP proxies require such devices to be deployed at both ends of the communication, because the protocol running between the proxies is usually proprietary, asymmetric TCP acceleration is able to boost network performance with unilateral deployment, meaning only one end of the peer connection is required to deploy the device or software. Asymmetric TCP acceleration implies that the WAN-side protocol has to remain TCP, with the same 5-tuples and connection states. The implementations typically terminate the TCP flows on the LAN side, as the TCP proxies do. On the WAN side, however, they mirror the TCP state machines and establish (forward) the TCP flows to the peers. To accelerate, they usually run a compatible version of TCP with performance improvements on the WAN side. While most of the improvement services (such as FAST TCP, Zeta-TCP, ViBE, Sandvine, and Wanos) focus on the TCP congestion avoidance algorithm, some also attempt to improve other aspects of the protocol. 
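The split-connection idea behind these transparent proxies can be illustrated with a small, self-contained sketch. The Python snippet below is only a toy relay: the listening port and upstream address are invented placeholders, it is an explicit proxy rather than a transparent interceptor, and it simply leans on the operating system's TCP stack so that each leg gets its own, shorter feedback loop with its own local acknowledgments and retransmissions.

```python
# Toy split-connection TCP relay (illustrative only): the client's connection
# is terminated locally and an independent connection is opened toward the
# server, so each leg runs its own TCP control loop in the kernel.
import socket
import threading

LISTEN_PORT = 8080               # hypothetical client-facing port
UPSTREAM = ("192.0.2.10", 80)    # hypothetical far-end server (placeholder address)


def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes in one direction until EOF or error, then close both sockets."""
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)    # data is ACKed locally on the leg it arrived from
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.close()
            except OSError:
                pass


def handle(client: socket.socket) -> None:
    upstream = socket.create_connection(UPSTREAM)
    # One thread per direction; together they form the relay.
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()


def main() -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", LISTEN_PORT))
    srv.listen(128)
    while True:
        conn, _addr = srv.accept()
        handle(conn)


if __name__ == "__main__":
    main()
```

A production accelerator differs mainly in that it intercepts flows transparently, buffers data so that it can perform the local retransmissions described above, and typically runs a tuned congestion control algorithm on the long WAN leg.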
Among these improvement services, Zeta-TCP provides more accurate loss detection and heuristic download acceleration in addition to its congestion avoidance algorithm, and ViBE enhances VoIP stability. Compared with symmetric TCP proxies, asymmetric TCP acceleration is more flexible in all kinds of deployment scenarios. A typical setup is to deploy the asymmetric acceleration device on the server side only; all the accessing clients then benefit from it without having to install any extra software. Performance-wise, without compression factors, asymmetric TCP acceleration is capable of offering the same level of improvement as the symmetric approaches. With symmetric deployment, however, the proxies are able to perform data compression and caching operations, which further boost performance by a factor of the compression ratio. The drawback of compression and caching, though, is added latency and burstiness on the receiver side. See also FAST TCP TCP tuning TCP congestion avoidance algorithm Acceleration WAN optimization Network acceleration
42954334
https://en.wikipedia.org/wiki/2014%20Brickyard%20400
2014 Brickyard 400
The 2014 Crown Royal Presents the John Wayne Walding 400 at the Brickyard Powered by BigMachineRecords.com, the 21st running of the event, was a NASCAR Sprint Cup Series stock car race that was held on July 27, 2014, at the Indianapolis Motor Speedway in Speedway, Indiana. Contested over 160 laps, it was the 20th race of the 2014 NASCAR Sprint Cup Series season. Twenty years after he won the inaugural race, Jeff Gordon of Hendrick Motorsports took the lead on the final restart and drove away from the field for his 90th career victory and a record-breaking fifth win at Indianapolis. Kyle Busch finished second, while Denny Hamlin, Matt Kenseth and Joey Logano rounded out the top five. The top rookies of the race were Kyle Larson (7th), Austin Dillon (10th), and Justin Allgaier (27th). Previous race Two weeks prior, Brad Keselowski held off a green-white-checker charge by Kyle Busch to win the Camping World RV Sales 301 at New Hampshire Motor Speedway. Keselowski described the performance as "definitely good for when we come back here in September", but also stated that his team "have to keep working and plugging away". Report Background Indianapolis Motor Speedway is a four-turn rectangular-oval track 2.5 miles (4 km) long. The track's turns are banked at 9 degrees, while the front stretch, the location of the finish line, has no banking. The back stretch, opposite the front, also has no banking. The track's front and back straightaways are each 5/8 mile (1 km) long, while the short straightaways between turns one and two and between turns three and four are each 1/8 mile (0.2 km) long. The racetrack has seats for more than 250,000 spectators. The defending race winner from 2013 was Ryan Newman. Crown Royal's "Your Hero Name Here" program selected John Wayne Walding for the race name. Walding, a former member of the Green Berets, was serving in Afghanistan when a sniper shot during the Battle of Shok Valley forced the amputation of his lower right leg. The 2014 race also marked the introduction of a new video board at the start-finish line, replacing the scoring pylon in use since 1994; the board debuted during this race weekend. Entry list The entry list was released on Tuesday, July 22, 2014 at 10:26 a.m. Eastern time. Forty-six drivers were entered for the race. 2013 NASCAR Camping World Truck Series champion Matt Crafton entered the race in the No. 29 RAB Racing Toyota, attempting to make his Cup debut. IndyCar Series driver and former Cup driver Juan Pablo Montoya returned to NASCAR in the No. 12 Team Penske Ford to make his second start of the season. 2000 Brickyard 400 winner Bobby Labonte entered the race in the No. 37 Tommy Baldwin Racing Chevrolet. Crafton, Montoya, and Trevor Bayne were required to make the race via speed, due to having no owners' championship points or being too low in owners' points, while Labonte had the advantage of using a past champion's provisional. Practice Two practice sessions were held at the track, on Friday at 11:35 am local time, and on Saturday at 9 am, three hours before the qualifying session. First practice Matt Kenseth was the fastest in the first practice session with a time of 48.313 seconds. Final practice Jimmie Johnson was the fastest in the final practice session with a time of 47.544 seconds. Qualifying In qualifying, Kevin Harvick won the pole with a new track-record time of 47.753 seconds; he had been quickest in each of the three segments of the session. 
Harvick noted the benefit of having the last pit stall on pit road, stating that it was "going to take some pressure off the guys for sure". He also felt that his car was quick enough if he fell down the order, although he acknowledged that "track position is definitely important". Jeff Gordon joined Harvick on the front row, almost two tenths of a second in arrears. Gordon referred to the performance of Harvick and his team by saying "they had the field covered". Gordon's teammate Dale Earnhardt, Jr. – who was second in points to Gordon coming into the race weekend – could only qualify 23rd, describing his session as "pretty pathetic" and "real slow". Matt Crafton, Brett Moffitt and David Stremme failed to make the race. Aric Almirola and Marcos Ambrose started at the rear of the field after switching to a backup car and changing a transmission, respectively. Race First half The race was scheduled to start at 1:19 p.m. Eastern time but began a few minutes later, with Kevin Harvick leading the field to the green flag; he ceded the lead to Jeff Gordon on lap two. Due to overnight downpours, a competition caution came out on lap 21. Joey Logano stayed out when the others pitted and assumed the lead, leading the field to the restart on lap 26. Kasey Kahne took the lead from Logano on lap 32, while Paul Menard brushed the wall in turn 3 after being bumped by Juan Pablo Montoya. Kahne gave up the lead on lap 38 to pit, with Kyle Larson assuming the lead; Larson in turn handed the lead to Austin Dillon after pitting on lap 43. Dillon made his stop and handed the lead to Denny Hamlin the next lap. Hamlin made his stop on lap 55 and handed the lead back to Kevin Harvick. Gordon retook the lead from Harvick on lap 66 and then both ducked onto pit road. Hamlin retook the lead as a result. Second half Danica Patrick broke the rear axle of her car when trying to leave pit road and stalled at the pit road exit, bringing out the second caution of the race on lap 68. The race restarted on lap 73 and Denny Hamlin lost the lead to Kasey Kahne. The caution flag came out for the third time on lap 97 when Trevor Bayne got loose and hit the inside wall in turn 3. Clint Bowyer did not pit during the caution period, so he assumed the lead. The race restarted on lap 102 and Bowyer lost the lead to Kahne. Kahne made his final stop on lap 127 and handed the lead to his teammate Gordon. With 31 laps to go, Gordon made his final stop and handed the lead to Martin Truex, Jr.; Truex made his stop and gave the lead to Michael Annett. Kahne cycled back to the lead with 30 laps to go, before Ryan Truex stalled in turn 2, bringing out the fourth caution of the race with 22 laps to go. Finish Gordon took the lead on the ensuing restart and went on to take the checkered flag for the 90th time in his career and for the fifth time in the Brickyard 400. The win guaranteed Gordon a spot in the Chase for the Sprint Cup; Carl Edwards, Jimmie Johnson and Joey Logano also clinched spots. Kahne fell to fifth on the restart, and ran out of fuel on the final lap, finishing sixth. Gordon described his race win by saying there was "nothing better, especially in a big race, coming to Victory Lane with your family here", adding that he "was trying so hard with 10 to go not to focus on the crowd". Kahne reflected on his position at the final restart, stating that he should have picked the outside line, adding, "pretty much let Jeff control that restart. 
I took off and never spun a tire and the inside had been more grip throughout the race and I started on the inside and I thought it was a great decision. But I didn't spin a tire and Jeff drove right by me." Post-race On the Tuesday following the race, NASCAR announced that the No. 11 team of Joe Gibbs Racing – the car of Denny Hamlin, who had finished the race in third place – had been penalized for a rules infraction in post-race inspection. This infraction was levied as a P5 penalty – the second-highest level – outlined in Section 12–4.5 A (9) of the 2014 NASCAR rule book. Per the subsequent Section 12–4.5 B of the regulations, any P5 penalty resulted in a 50-point penalty for both the driver and team owner, a fine of between $75,000 and $125,000 as well as race suspension and probation periods for team members in relation to the infraction. As the infraction was detected during a post-race inspection, a further 25 championship points were deducted as well as a further fine of $50,000. The infraction also violated several other Sections from the rule book: 12-1 – Actions detrimental to stock car racing; 20–2.1 – Car body must be acceptable to NASCAR officials and meet the following requirements: K – Any device or ductwork that permits air to pass from one area of the interior of the car to another, or to the outside of the car, will not be permitted. This includes, but is not limited to, the inside of the car to the trunk area, or the floors, firewalls, crush panels and wheel wells passing air into or out of the car; L – All seams of the interior sheet metal and all interior sheet metal to exterior sheet metal contact point must be sealed and caulked. This includes, but is not limited to, floors, firewalls, wheel wells, package trays, crush panels and any removable covers; 20–3.4 – All references to the inspection surface in sub-section 20–3.4 have been determined with the front lower edge of both main frame rails set at six inches and the rear lower edge of both main frame rails set at eight inches. For driver protection, all firewalls, floors, tunnels, and access panels must be installed and completely secured in place when the car is in competition; 20–3.4.5 – A rear firewall, including any removable panels or access doors, constructed using magnetic sheet steel a minimum of 22 gauge (0.031 inch thick), must be located between the trunk area and the driver's compartment and must be welded in place. Block-off plates/covers used in rear firewalls in place of blowers, oil coolers, etc., must be constructed of 22 gauge (0.031 inch thick) magnetic sheet steel. Block-off plates/covers must be installed with positive fasteners and sealed to prevent air leakage. Carbon fiber or aluminum block-off plates/covers will not be permitted. Accordingly, crew chief Darian Grubb was fined $125,000 post-race, suspended for the next six races and placed on NASCAR probation for a six-month period – until January 29, 2015 – while car chief Wesley Sherrill was also suspended six races and placed on NASCAR probation until the same date. Denny Hamlin lost 75 drivers' championship points, while the team lost 75 points in the owners' championship. Race results Race summary Lead changes: 15 Cautions: 4 for 16 laps Red flags: 0 Time of race: 2 hours, 39 minutes and 41 seconds Average Speed: Media Television Radio Standings after the race Drivers' Championship standings Manufacturers' Championship standings Note: Only the first sixteen positions are included for the driver standings. 
Notes References Brickyard 400 Brickyard 400 Brickyard 400 NASCAR races at Indianapolis Motor Speedway
28383690
https://en.wikipedia.org/wiki/Igor%3A%20Objective%20Uikokahonia
Igor: Objective Uikokahonia
Igor: Objective Uikokahonia is a 1994 graphic adventure game developed by the Spanish company Pendulo Studios and published by DROsoft. The game tells the story of Igor Parker, a university student in love with a classmate named Laura Wright. Hoping to win her affection, Igor surmounts a series of obstacles in an effort to join her on a field trip to the island paradise of Uikokahonia. The player assumes the role of Igor and navigates the campus while collecting items, solving puzzles and conversing with non-player characters. Igor entered development in 1993, as a part-time project between friends Ramón Hernáez, Rafael Latiegui, Felipe Gómez Pinilla and Miguel Angel Ramos. All four were fans of adventure games, and they hoped to emulate existing titles in the genre. Part of their motivation derived from the historic nature of Igor, since a graphic adventure game of its type had never been developed in Spain. After securing a publishing deal, the team incorporated as Pendulo Studios and hired freelancers for the game's music and backgrounds. Spain's weak game industry hampered production: Pendulo's working conditions were poor, and Igor was made on a low budget below 400,000 pesetas. The game was completed after roughly nine months. In Spain, Igor received positive reviews from magazines such as Super PC and Micromanía. It sold well enough to encourage Pendulo to pursue game development as a profession, although the company was disappointed by the sales of the English localization. Since its release, Igor has been cited as a landmark Spanish game, as an important stepping stone for Spanish game development and as an inspiration for other Spanish companies to create graphic adventures. It was officially released as freeware in 2007. With Igor behind it, Pendulo proceeded to become a highly regarded developer in Spain via titles such as Hollywood Monsters and Runaway: A Road Adventure. By 2019, it was Spain's longest-running game development studio. Gameplay and plot Igor: Objective Uikokahonia is a graphic adventure game controlled with a point-and-click interface, in a style that has been compared to LucasArts adventure games. Gameplay involves navigating the game world, conversing with non-player characters, solving puzzles and gathering and using items. The player interacts via a mouse by selecting actions, such as "Look" or "Take", from a list in the lower part of the screen. These actions may then be performed on hot spots in the game world. Below the action list is the player's inventory, which stores items for later use. During conversations with other characters, the inventory is replaced by dialogue options. Igor does not contain character death or dead-end states: the player may attempt any action without risking a game over. The game begins as Igor Parker, a university student, overhears his crush Laura Wright speaking with the campus playboy, Philip Goolash. In the conversation, Igor learns that Laura is going with the Biology class on a field trip to the island paradise of Uikokahonia. Philip, who is infatuated with Laura, opts to join her. Igor believes that the trip is his own opportunity to get closer to Laura and plans to come along, with the goal of defeating his rival. However, Igor is not enrolled in Biology and cannot afford the admission fee for the field trip. As a third requirement, he must also present a final project for the semester. Igor begins by distracting the dean's secretary and secretly writing his name in her files to enroll himself in Biology. 
He enlists the school nerd, Harrison, to do his Biology schoolwork in exchange for a blind date with a girl on campus. To pay for the trip, Igor tracks down a statue recently stolen from a museum by the art thief Ricky the Weasel, and turns it over to the police for reward money. While carrying out these tasks, Igor several times runs into Laura, who consistently sees him performing what appear to be good deeds. Igor's plans go awry when the Biology professor catches him cheating on his final exam, and Igor is barred from the field trip to Uikokahonia. However, he is able to improvise by smuggling himself in a crate bound for the island, and he parachutes down to Uikokahonia. In the process, he loses most of the items from his inventory, including a boomerang. He reaches Uikokahonia to find Philip acting aggressively toward Laura, and Igor is nearly drawn into a fight, but his boomerang returns and knocks Philip out. His victory and previous actions throughout the game ultimately win Laura over, and the two kiss as the game ends. Development Igor: Objective Uikokahonia originated in early 1993 when Ramón Hernáez, a Spanish programmer of business software, became interested in developing his own graphic adventure game. He was joined on the project by his friends Rafael Latiegui, Felipe Gómez Pinilla and Miguel Angel Ramos around the middle of that year. The team's members had been students together in college, and Hernáez and Pinilla—another business software programmer—were childhood friends. According to Latiegui, the team felt that it could create an adventure title of equal quality to others on the market, and intended Igor "[f]rom the very beginning" as a commercial release rather than an amateur project. The four were encouraged by their belief that "many published things did not have great quality", Latiegui noted. Everyone on the team was a fan of adventure games, and Hernáez cited Igors historic significance as further inspiration to pursue the idea. While Spanish developers like Aventuras AD had released interactive fiction games during the 1980s, including those with graphics, a graphic adventure game like Igor had never been made in Spain. According to Latiegui, Igors development was "absolutely amateurish". He later called it "more of an exercise to start learning", and noted that the team was self-taught throughout production. The four members did not consider player experience or financial issues while creating Igor, Hernáez explained, but simply made a game that interested them personally. They emulated other graphic adventure titles from the time, especially Indiana Jones and the Fate of Atlantis, although Latiegui and Ramos stressed that the team tried not to copy rival games directly. The decision not to include character death was "a kind of principle that we have set for all our future games", Latiegui said in 1994. During the initial months, the four worked on Igor inconsistently and in their spare time, without a professional schedule. They began by creating a game engine, written in Pascal. Pinilla and Latiegui later cited quality concerns with the resultant code. With the engine ready, the team conceived Igors plot, which Latiegui described as a simple boy-meets-girl story. He wrote that the script "suffered ten thousand changes" and was influenced by the team's time in college; its use of archetypical school story characters derived "unconsciously" from the team's experiences, he said. 
Similarly, Ramos stated that the game's Spanish sense of humor was a natural and "unintentional" outgrowth of the team's interests. The team proceeded to create a demo for Igor and display it to prospective publishers, such as DROsoft and Erbe Software. Offers quickly emerged, although certain publishers were skeptical, and one refused to consider the project because it perceived Spanish games as low-quality. Hernáez said that the group "really saw that [it] could move forward with the project" around three months into Igors development. While Dinamic Multimedia bid on the game, the team ultimately chose DROsoft. According to Pinilla, the realization that the game would be published inspired him and the others to incorporate. Latiegui, Hernáez, Pinilla and Ramos co-founded Pendulo Studios in Madrid during September 1993, with the goal of completing Igor. They began to develop the game in a more structured, full-time manner, and soon contracted the freelance artist Carlos Veredas, who created the game's background artworks. They were illustrated on paper before being digitized. For the soundtrack, Pendulo employed freelancer Esteban Moreno. Hernáez later called locating and paying a background artist the "most complicated part" of Igors development, as the team needed someone who would work for very little money. As a Spanish game, Igor was developed in difficult economic circumstances. Spain's game industry had thrived in the golden age of Spanish software, but the 16-bit era heralded a market collapse for game development in the country. Much of the scene had disappeared by the early 1990s. Pendulo noted that "there was practically no national production and the prospects were not too encouraging" when Igor began, and Latiegui recalled a moment when the team nearly abandoned the project because of low financial projections by publishers. The game's budget was small—Gerard Masnou of GameLive PC later reported that it was below 400,000 pesetas—and the team's working conditions were poor. Pendulo was located for a time in the corner of a warehouse in Madrid's Arganda del Rey municipality. Writing in 1994, Francisco Gutiérrez of PC Manía called Igors existence "extraordinary" given the state of the industry, and cited Pendulo as one of Spain's few current developers to successfully see a game to completion. In retrospect, Pinilla summarized the company's situation as a serious challenge, and said that it only grew harder after Igor was finished. The game underwent a fast production cycle and wrapped roughly six months after full-time production started. DROsoft launched Igor in summer 1994, on floppy disks and exclusively for DOS. Versions Following the release of Igor in Spain, Pendulo Studios signed with Infogrames in 1994 to publish the game across Europe. A writer for PC Manía opined at the time that this announcement could mark "the first important step toward the resurgence of national [Spanish] software." Pendulo began by rewriting the game's engine: according to Ramón Hernáez, the team realized that "the engine had to be improved to offer more quality". Igors character graphics were redone, and the upgraded engine employed protected mode and featured support for voice-overs, enhanced scrolling, a 320x200 screen resolution and other features. However, Rafael Latiegui later said that this edition of Igor was canceled "due to internal problems" at Infogrames. 
Another version of the game, which contains new backgrounds and coding upgrades, was developed by Pendulo for the United States market. It was published in the country by Optik Software in 1995. Pendulo later characterized Optik's distribution job as poor, and argued that sales were low as a result. Latiegui accused the publisher of failing to pay Pendulo any royalties. Pendulo declared it a mistake to sign with Optik, and reported that "we paid for our inexperience by rejecting an offer that would have been more interesting". Igor ultimately failed to see success outside Spain. In mid-1998, Igor was relaunched via a promotional campaign by the Spanish newspaper El Mundo, in partnership with Dinamic Multimedia. Released on CD-ROM, this edition of the game retailed for 720 pesetas and was available as an optional purchase alongside the paper. It includes the updates from the United States release, but adds full voice-overs and improvements to the music and sound effects. Optik had also released a CD-ROM version of Igor, complete with English voice acting, but Agustin Cordes of Just Adventure wrote that it had become "one of the most difficult to find versions of any game" by 2004. Reception According to HobbyConsolas, Igor: Objective Uikokahonia was well-received by adventure game players but "did not have the sales success they [Pendulo Studios] expected." However, Rafael Latiegui called it successful enough to inspire the team to pursue game development as a profession. He said in 1998 that the game ultimately "served more than anything to make us known". Igors visuals received praise from both Micromanía and PC Manía, while Antonio J. Martinez of Super PC found the characters, screen scrolling and background artwork "quite well achieved". Writers from all three publications cited the soundtrack as a high point. Martinez summarized, "Despite its simplicity, there is no doubt that Igor is a good national premiere for this style of game." In Micromanía, Enrique Ricart went further, writing that Igor was of a "quality that [had] nothing to envy" from international competitors such as Indiana Jones and the Fate of Atlantis. He praised Igors story and "logical" gameplay, and called the game an encouraging sign for the Spanish game industry, a sentiment echoed in PC Manías review. In a 2004 retrospective review, Agustin Cordes of Just Adventure described Igor as "neither revolutionary or novel" despite its "huge impact" domestically, although he enjoyed the game's humor and visuals. He argued that Igor had "stood the test of time" but was nevertheless unsatisfying because of its unrealized potential. Legacy Shortly after the release of Igor: Objective Uikokahonia, PC Manía cited the game and its developer as part of a positive trend in Spanish games, which saw "more and more companies join[ing] the slow national scene." As the first graphic adventure game developed in Spain, Igor "encouraged other [domestic] companies to devote themselves" to the genre, according to GameLive PC. Alejando Alcolea of HobbyConsolas retrospectively highlighted it as one of the two most important domestic games, alongside Pyro Studios' Commandos: Behind Enemy Lines, to be released in the 1990s after the Spanish game crash. In 2015, academic researchers Manuel Garin and Víctor Manuel Martínez called Igor "the first great achievement of the genre fully produced in Spain" and listed Pendulo as one of the "Five Significant Exceptions within the Decline of the 1990s", along with firms such as Pyro and Gaelco. 
With Igor as its first game, Pendulo proceeded to become a major force in Spanish game development. While Spanish studios often worked in multiple genres, the team chose to remain fully focused on adventure games. Vandal's Julio Gómez later highlighted Igor as the beginning of Spain's "most important" adventure game developer; writers for publications such as PC Manía and El Mundo declared Pendulo the Spanish equivalent to LucasArts. Garin and Martínez characterized Pendulo as one of the companies that "not only defined indigenous production in the 1990s but also shaped its future in the decades that followed." After Igor's release, Miguel Angel Ramos left the team; Rafael Latiegui, Ramón Hernáez and Felipe Gómez Pinilla continued Pendulo alone. Their second game was Hollywood Monsters (1997), in which they diverged from the development process used for Igor. Pinilla explained that they "started from scratch in everything". The game was a critical and commercial hit, with sales above 250,000 units in Spain by 2011, and it marked a turning point for Pendulo. Despite calling Igor a "great adventure", Francisco Delgado of Micromanía stated that Hollywood Monsters "was the game that really brought fame and the mass market to" the developer. However, like Igor, it failed to become an international hit. By 2019, Pendulo had become Spain's longest-running game development studio. A writer for HobbyConsolas called Pendulo a team of "true survivors", but noted that the company "had nothing easy, and even almost disappeared on a couple of occasions". While Pendulo saw its first international breakthrough with Runaway: A Road Adventure (2001), the game suffered major distribution setbacks due to the liquidation of publisher Dinamic Multimedia, which nearly bankrupted Pendulo. The game nevertheless sold over 600,000 units in Europe, which stabilized Pendulo and allowed it to create Runaway 2: The Dream of the Turtle (2006). The company followed up with the adventure titles Runaway: A Twist of Fate, The Next Big Thing and, in 2012, Yesterday. Thereafter, Pendulo entered another period of financial uncertainty, but rebounded with Yesterday Origins in 2016 via publisher Microïds. The two collaborated again on Blacksad: Under the Skin, an adaptation of the Spanish comic series Blacksad. Garin and Martínez summarized, "Without making too much noise or boasting about it, this humble studio is one of the few national companies that has remained working in a genre and a gaming idea for almost two decades". In 2017, Clara Castaño Ruiz of HobbyConsolas declared Pendulo "synonymous with graphic adventures in Spain" and one of the country's most significant developers. In retrospect, HobbyConsolas named Igor one of the best adventure games ever developed in Spain. Assessing the game's place in Spanish adventure game history by 2008, Julio Gómez found Igor's story subpar but nevertheless argued that the "spectacular technical section and well-made puzzles offered an experience nothing short of miraculous". Micromanía's Santiago Tejedor highlighted the game in 2015 as an early sign of Pendulo's skill at storytelling and characterization. In 2001, Gerard Masnou of GameLive PC called it "without a doubt [one of] the two best graphic adventures" that Spain had produced, although he wrote in 2003 that it was "not comparable to ... the best adventures published internationally" in its day.
A writer for Gry Online echoed this complaint and went further, declaring Igor inferior to the LucasArts titles that had inspired it, despite the quality of the game's puzzles and comedy. Pendulo itself reported that it was still "proud" of Igor by 2001, as it was an important stepping stone toward its more ambitious projects. The company announced the game's release as freeware during the Adventure Developers Online Conference 2007. In 2011, Latiegui said that "we would lie if we said that the idea [of an Igor sequel] has never crossed our minds", but that it was the Pendulo game "with the least chance of coming back". See also 3 Skulls of the Toltecs Dráscula: The Vampire Strikes Back Notes References External links Optik Software page for Igor: Objective Uikokahonia (archived) 1994 video games Adventure games Point-and-click adventure games Freeware Video games developed in Spain DOS games DOS-only games Pendulo Studios games Single-player video games
67534300
https://en.wikipedia.org/wiki/RemotePC
RemotePC
RemotePC is a remote access and remote control software application, developed and owned by IDrive Inc., a software company based in Calabasas, California, United States. Its core function is enabling remote access to, and maintenance of, computers and other devices. The first version of the software was released in early 2017. Overview RemotePC software was created by the team of IDrive Inc., a private technology company based in Calabasas, California. The app was specifically developed for remote communication and control functions, including text chat, voice, RemotePC Meeting, interactive annotation and more. RemotePC has been discussed and cited in technology reviews and by multiple industry outlets such as Software Advice, Capterra, GetApp (Gartner's subsidiary), TechRadar and PCMag, among others. The application's use accelerated in 2020 and 2021 as the demand for remote work, learning and communication grew during the COVID-19 pandemic. Technology and functionality The app uses TLS v1.2 with AES-256 encryption for exchanging data between devices and is compliant with HIPAA and GDPR requirements. It does not require any special software for installation and can be accessed directly via the web. According to GitHub statistics, the first version of the RemotePC software was released in January 2017. The application is written in Python (85.9%), HTML (13.1%) and CSS (1%). RemotePC is compatible with PCs, Macs and Linux systems, and has mobile applications for iOS and Android devices. Its main functions include RemotePC Consumer and SoHo, RemotePC Team, RemotePC Enterprise, RemotePC HelpDesk, RemotePC Meeting and RemotePC ScreenShare. See also Comparison of remote desktop software Screen Sharing RFB protocol Remote Desktop Services References RemotePC 2017 software Virtual Network Computing Remote administration software Remote desktop Windows remote administration software MacOS remote administration software Linux remote administration software
1814421
https://en.wikipedia.org/wiki/Audible%20%28service%29
Audible (service)
Audible is an American online audiobook and podcast service that allows users to purchase and stream audiobooks and other forms of spoken word content. This content can be purchased individually or under a subscription model where the user receives "credits" that can be redeemed for content monthly and receives access to a curated on-demand library of content. Audible is the United States' largest audiobook producer and retailer. The service is owned by Audible, a wholly-owned subsidiary of Amazon.com, Inc., headquartered in Newark, New Jersey. History The company's first product was an eponymous portable media player known as the Audible Player; released in 1997, the device contained around 4 megabytes of on-board flash storage, which could hold up to two hours of audio. To use the player, consumers would go online to the official Audible website, download the audiobook, and put it onto the player. On October 24, 1999, Audible suffered a setback when its CEO, Andrew J. Huffman, died. Development proceeded, however, leading to Audible licensing the ACELP codec for its downloads in 2000. In 2003, Audible reached an agreement with Apple Computer to be the exclusive provider of audiobooks for the iTunes Music Store. This agreement ended in 2017 due to antitrust rulings in the European Union. In 2005, Audible released "Audible Air", which allowed users to download audiobooks directly to PDAs and smartphones. Audible Air content would update automatically, downloading chapters as required that would then delete themselves after they had been listened to. On January 31, 2008, Amazon announced it would purchase Audible for about $300 million. In April 2008, Audible began producing exclusive science fiction and fantasy audiobooks under its "Audible Frontiers" imprint. At launch, 25 titles were released. In May 2011, Audible launched Audiobook Creation Exchange (ACX), an online rights marketplace and production platform. The platform has been so successful that in 2012, Audible reported it had received more titles from ACX than from its top three audio providers combined. In March 2012, Audible launched the A-List Collection, a series showcasing Hollywood stars including Claire Danes, Colin Firth, Anne Hathaway, Dustin Hoffman, Samuel L. Jackson, Diane Keaton, Nicole Kidman, and Kate Winslet performing great works of literature. Firth's performance of Graham Greene's The End of the Affair was named Audiobook of the Year at the Audie Awards in 2013. Audible began offering narration workshops at acting schools, including Juilliard and the Tisch School of the Arts; in 2013, Audible's CEO speculated that the company was the largest single employer of actors in the New York area. In September 2012, Audible introduced a feature known as "Whispersync for Voice", which allows users to continue audiobooks from where they left off reading them on Amazon Kindle. In 2016, the company announced that it would open a new facility in Newark, New Jersey, the "Innovation Cathedral", in a former church building. In July 2019, a new feature was announced called Audible Captions, in which machine-generated text would be displayed alongside the audio narration. Audible was sued by the Association of American Publishers shortly thereafter for copyright violation. The lawsuit was settled in early 2020, with Audible agreeing not to implement the Captions feature without obtaining express permission.
In November 2020, Audible modified its return and exchange policy in response to concerns by authors, who felt that customers were abusing the policy to listen to audiobooks without paying. Content and pricing Audible's content includes more than 200,000 audio programs from leading audiobook publishers, broadcasters, entertainers, magazine and newspaper publishers and business information providers. Content includes books of all genres, as well as radio shows (classic and current), speeches, interviews, stand-up comedy, and audio versions of periodicals such as The New York Times and The Wall Street Journal. Audible offered two monthly subscription tiers, "Audible Gold" and "Audible Platinum", which were priced at US$14.95 and $22.95 respectively: both services allow users to obtain "credits" which can be used to purchase audio books (one whole credit for Gold, and two whole credits on Platinum), while Platinum also included additional incentives such as exclusive discounts. On August 24, 2020, Audible replaced both plans with "Audible Premium Plus" (a renaming of Gold, though with the Platinum pricing and credits grandfathered for existing subscribers), and introduced a new $7.95 subscription tier known as Audible Plus. Both tiers include access to a curated on-demand library of audiobooks, podcasts, and other original productions, while the Audible Plus tier does not include credits. Once a customer has purchased a title, it remains in that customer's library and can be downloaded or streamed at any time. As of April 1, 2019, credits expire one year after they are issued, and credits prior to this day expire after two years. Original content In May 2015, Audible hired Eric Nuzum, formerly VP of programming at NPR, as its SVP of original content development. In 2016, Audible introduced an on-demand service known as "Audible Channels", which features short-form audio programming from various outlets, including news and other original productions. Access to Audible Channels is included as part of Audible's subscription, and also became available to Amazon Prime subscribers. Nuzum compared this strategy to original content created by HBO or Netflix, and stated that the service deliberately avoided use of the word "podcast" as to not alienate listeners who were unfamiliar with the concept. Among its original productions are Where Should We Begin? — a relationship podcast with Esther Perel, Sincerely, X''' — a podcast featuring anonymous TED Talks, Ponzi Supernova — a chronicle of the Madoff investment scandal, The Butterfly Effect — a podcast series by Jon Ronson chronicling the impact of PornHub on internet pornography, and West Cork, a true crime podcast investigating an unsolved 1996 murder in West Cork, Ireland. In August 2018, it was reported that Nuzum was stepping down, and that Amazon had laid off most of the short-form content staff. This move came amid a shift in Audible's original content strategy, including a greater focus on "audiobook-first" deals with writers. Audible's new strategy for original content was announced in Fall 2020 with the debut of a new lower-price tier providing access to "Audible Originals." The new tier, called Premium Plus, provided access at the time of introduction to 11,000 audio titles available only by subscription to Audible. These titles included earlier original material, plus new audio productions featuring such creators as Common, Jamie Lee Curtis, Kate Mara, Harvey Fierstein, Michael Caine and Jesse Eisenberg. 
Device support Audible audio files are compatible with hundreds of audio players, PDAs, mobile phones and streaming media devices. Devices that do not have AudibleAir capability (allowing users to download content from their library directly into their devices) require a Windows PC or Macintosh to download the files. Additionally, titles can be played on the PC (using iTunes or AudibleManager). Titles cannot be burned to CD with AudibleManager. According to Audible's website, they can be burned to CD using Apple's iTunes and some versions of Nero. (The DRM generally allows a title to be burned to CD once, although the resulting CDs can be played in any CD player and have no copy prevention.) Currently there is no support for Linux, although AudibleManager is known to work through Wine (though this is not officially supported by Audible). Prospective buyers of media players can check the audible.com "Device Center" to verify whether the device will play .aa files, as well as play them at the desired level of audio fidelity. Audible players are available on Apple iPhones, iPods, Android, and Windows Phone devices. The Audible App allows for the downloading and playing of audio books purchased via Audible.com and allows the user to store multiple titles for play on mobile devices using the AA file format developed by Audible. Quality Several quality formats have been available from Audible; currently, only the "Format 4" and "Enhanced" formats are available for download. AAX files are encrypted M4B files. The audio is encoded in variable quality AAC format. While the vast majority of books are encoded at 64 kbit/s, 22.05 kHz, stereo, some are as low as 32 kbit/s, mono. Radio plays are often encoded at 128 kbit/s and 44.1 kHz. Additionally, many audiobooks in Germany are encoded at the latter bitrate and are marketed as "AAX+"; however, there is no difference in the actual file format. Digital rights management Audible's .aa file format encapsulates sound encoded in either MP3 or the ACELP speech codec, but includes unauthorized-playback prevention by means of an Audible username and password, which can be used on up to four computers and three smartphones at a time. Licenses are available for schools and libraries. Audible's content can only be played on selected mobile devices. Its software does enable users to burn a limited number of CDs for unrestricted playback, resulting in CDs that can be copied or converted to unrestricted digital audio formats. Because of the CD issue, Audible's use of digital rights management on its .aa format has earned it criticism. While multiple software products are capable of removing the Audible DRM protection by re-encoding in other formats, Audible has been quick to threaten the software makers with lawsuits for discussing or promoting this ability, as happened with River Past Corp and GoldWave Inc. Responses have varied, with River Past removing the capability from its software, and GoldWave retaining the capability but censoring discussions about it in its support forums. However, there are still many other software tools from non-US countries which easily bypass the DRM control of Audible by various methods, including sound recording, virtual CD burning, and even using a media plugin library once provided by Audible themselves. After Apple abandoned most DRM measures and Amazon's downloads ceased to use it, Audible's DRM system remains one of the few still in place.
Many Audible listings displayed to non-U.S. customers are geo-blocked. According to Audible, this is because the publisher who has provided the title does not have the rights to distribute the file in a given region. When a user is logged in, titles that he or she cannot purchase will be hidden. There were hopes that Amazon, after its purchase of Audible, would remove the DRM from its audiobook selection, in keeping with the current trend in the industry. Nevertheless, Audible's products continue to have DRM, similar to the policy of DRM-protecting its Kindle e-books, which have DRM that allows for a finite, yet undisclosed number of downloads at the discretion of the publisher. However, Audible titles that are DRM-free can be copied to the Kindle and made functional. Audible is able to offer DRM-free titles for content providers who wish to do so. FFmpeg 2.8.1+ is capable of playing Audible's .aa and .aax file formats natively. Controversies Due to a large number of complaints about ACX (which is the Amazon company behind Audible), a large-scale campaign was launched in February 2021 to raise awareness of the issues. It is known as #Audiblegate. Market power Audible operates the Audiobook Creation Exchange, which enables individual authors or publishers to work with professional actors and producers to create audiobooks, which are then distributed to Amazon and iTunes. The service is available to residents of the United States and the United Kingdom. Audible produces 10,000 titles a year and may be the largest employer of actors in New York City. See also RBMedia LibriVox Storytel References External links Behind-the-Scenes Tour of Audible, by Katherine Kellgren Amazon (company) acquisitions Audiobook companies and organizations Retail companies established in 1995 Internet properties established in 1997 Online retailers of the United States Companies based in Newark, New Jersey Mass media in Newark, New Jersey IOS software Android (operating system) software Windows Phone software Windows software Universal Windows Platform apps 2008 mergers and acquisitions Subscription services YouTube sponsors
13671787
https://en.wikipedia.org/wiki/Seapine%20Software
Seapine Software
Seapine Software was a privately held Mason, Ohio-based software and services company. The company developed a suite of software products that managed the full software development lifecycle. Seapine's tools included testing tools, configuration management, test-case management, and requirements management. The company was best known for its TestTrack line of application lifecycle management (ALM) software. The company was acquired in 2016 by Minneapolis, Minnesota-based Perforce Software, and TestTrack was rebranded as Helix ALM. History Seapine was established in 1995 by Rick and Kelly Riccetti. The company shipped their first product, TestTrack Pro, in 1996. In 2012, Seapine built a new 50,000 square-foot technology headquarters in Mason, OH for their 100+ employees. In 2016, the company was acquired by Minneapolis, Minnesota-based Perforce Software. Six months after the acquisition, Perforce renamed TestTrack as Helix ALM to match other products in Perforce's suite. Awards and recognition Seapine was on the SD Times 100 list in 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, and 2015. In 2013, Seapine was rated a Champion in the application lifecycle management space by Info-Tech Research Group. Seapine products had won various Jolt Awards, including its QA Wizard in 2011, Surround SCM in 2008, and TestTrack Studio in 2007. See also List of revision control software Comparison of issue tracking systems References External links Quality and Transparent ALM Development software companies
41730846
https://en.wikipedia.org/wiki/Counter-Strike
Counter-Strike
Counter-Strike (CS) is a series of multiplayer first-person shooter video games in which teams of terrorists battle to perpetrate an act of terror (bombing, hostage-taking, assassination) while counter-terrorists try to prevent it (bomb defusal, hostage rescue, escort mission). The series began on Windows in 1999 with the release of the first game, Counter-Strike. It was initially released as a modification ("mod") for Half-Life that was designed by Minh "Gooseman" Le and Jess Cliffe before the rights to the mod's intellectual property were acquired by Valve, the developers of Half-Life, who then turned Counter-Strike into a retail product. The original Counter-Strike was followed by Counter-Strike: Condition Zero, developed by Turtle Rock Studios and released in March 2004. A previous version of Condition Zero that was developed by Ritual Entertainment was released alongside it as Condition Zero: Deleted Scenes. Eight months later, Valve released Counter-Strike: Source, a remake of the original Counter-Strike and the first in the series to run on Valve's newly created Source engine. The fourth game in the main series, Counter-Strike: Global Offensive, was released by Valve in 2012 for Windows, OS X, Xbox 360, and PlayStation 3. Hidden Path Entertainment, who worked on Counter-Strike: Source post-release, helped to develop the game alongside Valve. There have been several third-party spin-off titles created for Asian markets over the years. These include the Counter-Strike Online series, Counter-Strike Neo, and Counter-Strike Nexon: Studio. Gameplay Counter-Strike is an objective-based, multiplayer first-person shooter. Two opposing teams—the Terrorists and the Counter Terrorists—compete in game modes to complete objectives, such as securing a location to plant or defuse a bomb and rescuing or guarding hostages. At the end of each round, players are rewarded based on their individual performance with in-game currency to spend on more powerful weapons in subsequent rounds. Winning rounds results in more money than losing, and completing objectives such as killing enemy players gives cash bonuses. Uncooperative actions, such as killing teammates, result in a penalty. Main series Counter-Strike Originally a modification for Half-Life, the rights to Counter-Strike, as well as the developers working on it, were acquired by Valve in 2000. The game received a port to Xbox in 2003. It was also ported to OS X and Linux in the form of a beta in January 2013. A full release was published in April 2013. Condition Zero Counter-Strike was followed up with Counter-Strike: Condition Zero, developed by Turtle Rock Studios and released in 2004. It used the Half-Life GoldSrc engine, similar to its predecessor. Besides the multiplayer mode, it also included a single-player mode with a "full" campaign and bonus levels. The game received mixed reviews in contrast to its predecessor and was quickly followed by a further entry in the series titled Counter-Strike: Source. Source Counter-Strike: Source was the first publicly released game by Valve to run on the Source engine. Counter-Strike: Source was initially released as a beta to members of the Valve Cyber Café Program on August 11, 2004. On August 18, 2004, the beta was released to owners of Counter-Strike: Condition Zero and those who had received a Half-Life 2 voucher bundled with some ATI Radeon video cards.
While the original release only included a version for Microsoft Windows, the game eventually received a port to OS X on June 23, 2010, with a Linux port afterwards in 2013. Global Offensive Counter-Strike: Global Offensive was the fourth release in the main, Valve-developed Counter-Strike series in 2012. Much like Counter-Strike: Source the game runs on the Source engine. It is available on Microsoft Windows, OSX, and Linux, as well as the Xbox 360 and PlayStation 3 consoles, and is backwards compatible on the Xbox One console. Spin-offs Neo A Japanese arcade adaptation of Counter-Strike, the original Half-Life multiplayer modification. It is published by Namco, and runs on a Linux system. The game involves anime-designed characters in a futuristic designed version of Counter-Strike. A selection of single-player missions, mini-games, and seasonal events were added to prolong the game's interest with players. Online series Counter-Strike Online is a free-to-play spin-off available in much of eastern Asia. It was developed by Nexon, with oversight from Valve. It uses a micropayment model that is managed by a custom version of the Steam back-end. Announced in 2012 and aimed at the Asian gaming market, a sequel titled Counter-Strike Online 2 was developed by Nexon on the Source game engine, and released in 2013. Nexon: Studio In August 2014, Nexon announced Counter-Strike Nexon: Zombies, a free-to-play, zombie-themed spin-off, developed on the GoldSrc game engine. On September 23, 2014, an open beta was released on Steam. The game launched on October 7, 2014, featuring 50 maps and 20 game modes. The game features both player versus player modes such as team deathmatch, hostage rescue, bomb defusal, and player versus environment modes such as cooperative campaign missions and base defending. Reception from critics was generally negative with criticism aimed at the game's poor user interface, microtransactions, and dated graphics. On October 30, 2019, Counter-Strike Nexon: Zombies was renamed to Counter-Strike Nexon: Studio. Competitive play Counter-Strike has had nearly 20 years of competitive play beginning with the original Counter-Strike. The first major tournament was hosted in 2001 at the Cyberathlete Professional League. Cyberathlete Professional League, along with World Cyber Games and Electronic Sports World Cup were among the largest tournaments for the Counter-Strike series until 2007. From 2013 to 2020, Valve have hosted Counter-Strike: Global Offensive Major Championships with large prizes. These have become the most prestigious tournaments in CS:GO. Reception Counter-Strike is considered one of the most influential first person shooters in history. The series has a large competitive community and has become synonymous with first person shooters. , the Counter-Strike franchise has sold over 25 million units. References First-person shooters Video game franchises introduced in 1999 Video game franchises Online games Valve Corporation franchises Valve Corporation games Asymmetrical multiplayer video games Video games about bomb disposal
3438555
https://en.wikipedia.org/wiki/Corel%20Presentations
Corel Presentations
Corel Presentations (often referred to simply as Presentations) is a presentation program akin to Microsoft PowerPoint and LibreOffice Impress. The current release, version 2020 (Release 20 internally), is available only as part of Corel's WordPerfect Office productivity suite. History Presentations shares much of its code with WordPerfect. It originally evolved from DrawPerfect, an MS-DOS-based drawing program released in 1990 by the now-defunct WordPerfect Corporation. The first version, WordPerfect Presentations 2.0 for DOS, appeared in 1993, and was followed by a Microsoft Windows port of the DOS version a few months later. Due to severe usability and performance issues, the first Windows version was not considered a serious contender in the market. Novell acquired WordPerfect Corporation in April 1994 and shipped an upgrade of Presentations, Novell Presentations 3.0 for Windows, as part of the Novell PerfectOffice 3.0 for Windows productivity suite in December 1994. Corel acquired PerfectOffice in January 1996 and released the first 32-bit version of Presentations in May of that year. Since then, the company has issued thirteen upgrades: version 8 (1997), version 9 (1999), version 10 (2001), version 11 (2003), version 12 (2004), version 13 (2006), version 14 (2008), version 15 (2010), version 16 (2012), version 17 (2014), version 18 (2016), version 19 (2018) and version 20 (2020). The last DOS release, version 2.1, appeared in 1997 as part of the Corel WordPerfect Suite for DOS. Corel Presentations for Linux was included with the various editions of Corel WordPerfect Office for Linux. Corel no longer develops programs for the Linux operating system. User interface Over the years, the program's interface has evolved to more closely resemble that of Microsoft PowerPoint. Its primary strengths remain in the areas of graphics manipulation, although it does include a number of advanced transition and animation effects not found in its competitors. The program also has the ability to save to Microsoft PowerPoint and PDF formats, as well as to publish presentations to the Internet. Features Corel Presentations provides tools for creating detailed slide shows. The program includes a number of templates and slide-show types to help users lay out a presentation. Each template defines a default background, font, color scheme and slide layout; these defaults can be changed within the slide show. The program has always included a bitmap editor, which works like a paintbrush program and allows individual pixels to be altered, which can be useful for UI and game developers. See also Comparison of office suites Corel software Presentation software Presentation software for Windows
17197166
https://en.wikipedia.org/wiki/SplendidCRM
SplendidCRM
SplendidCRM produces open source Customer Relationship Management (CRM) software developed in C# for the ASP.NET framework. The product is available in two editions - Open Source and Professional - the main difference between the two products is the inclusion of stored procedures and source code for a MS Outlook 2003/2007 plug-in in the Professional edition. History SplendidCRM, Inc. was founded in November 2005 by Paul Rony. The project began as an experiment to test the effectiveness of the Microsoft .NET development environment at porting an application from the LAMP software bundle to the Microsoft ASP.NET framework. It was soon noted that there were few initiatives at the time dedicated to promoting Open Source and ASP.NET from a CRM perspective and SplendidCRM rapidly gained a following in the .NET developer community. It is a featured community project on the official ASP.NET website. SplendidCRM adopted the LAMP based SugarCRM Open Source as the blueprint on which to initially model its ASP.NET application. The application is compatible with and uses SugarCRM assets such as icons and terminology packs. However the actual source code that makes up the application is completely original. SplendidCRM complies with the terms of the Sugar Public License 1.1.3. SplendidCRM first appeared on Codeplex on 28 June 2006 and has been downloaded in excess of 4000 times, however this does not include the volume of downloads directly from the SplendidCRM corporate website. SplendidCRM is based on but is not a competitor of SugarCRM. SplendidCRM targets the Microsoft-centric small business sector where Microsoft CRM is too expensive and complex to implement. SplendidCRM may be taking on Microsoft CRM with Microsoft's own tools. Product SplendidCRM is an open-source customer relationship management application built on the Microsoft .NET framework. The product can be deployed in a variety of scenarios including Software-as-a-service. Licensing SplendidCRM is one of the new breed of Microsoft-based open-source projects. It is unusual because at present it features a developer based licensing model for its professional edition, which means that system integrators wishing to adopt SplendidCRM only need to pay for the number of software developers working on their project - rather than the number of ultimate end users. However, companies that adopt SplendidCRM for internal use must adhere to the per user fee for all their users. SugarCRM has now been released under the GPLv3, although when the port was made SugarCRM was licensed under its own Sugar Public License 1.1.3 (SPL). Following criticism from the Open Source Initiative SugarCRM decided to adopt the GPLv3 to license its community edition (open source) product on July 25, 2007. The SPL is a license that was derived from the Mozilla Public License, and as such is a file-based license; the move to GPLv3 by SugarCRM is seen to be a positive step by their developer community, and consequently also benefits the developers of ported solutions such as SplendidCRM. SplendidCRM complied with the SPL, and was only able to sell a Professional Version because the Database Views and Stored Procedures (SPs) were developed completely independently of SugarCRM. These are not available in the current Open Source version of SplendidCRM. 
The move by SugarCRM to GPLv3 means that once SplendidCRM has also moved to GPLv3, ASP.NET developers will be able to use SplendidCRM as a framework for developing their own products and solutions without the restrictions previously experienced under the SPL. See also SugarCRM Web application Epesi References External links Official website SplendidCRM on CodePlex Software companies based in North Carolina CRM software companies Free customer relationship management software Free software companies Companies established in 2005 Companies based in North Carolina Software companies of the United States
28118704
https://en.wikipedia.org/wiki/Nightingale%20%28software%29
Nightingale (software)
Nightingale is a discontinued free, open source audio player and web browser based on the Songbird media player source code. As such, Nightingale's engine is based on the Mozilla XULRunner with libraries such as the GStreamer media framework and libtag providing media tagging and playback support, amongst others. Since official support for Linux was dropped by Songbird in April 2010, Linux-using members of the Songbird community diverged and created the project. By contrast to Songbird, which is primarily licensed under the GPLv2 but includes artwork that is not freely distributable, Nightingale is free software, licensed under the GPLv2, with portions under the MPL and BSD licenses. Nightingale has not seen a new release since 2014 and most, if not all, Nightingale developers are no longer actively contributing to its development. Notable features Plugins compatible with Songbird (with one modification to the addon) Multi-platform compatibility with Windows XP, Vista, 7, 10, Linux and Mac OS X v10.5 (x86, x86-64). Ability to play multiple audio formats, such as MP3, AAC, Ogg Vorbis, FLAC, Apple Lossless and WMA Ability to play Apple FairPlay-encoded audio on Windows and Mac platforms via hooks into QuickTime (authorization takes place in iTunes) Ability to play Windows Media DRM audio on Windows platforms A skinnable interface, with skins called "feathers" Media files stored on pages viewed in the browser show up as playable files in Nightingale MP3 file download Ability to subscribe to MP3 blogs as playlists Ability to build custom mixes Ability to scan the user's computer for all audio files and add them to a local library A configurable and collapsible graphical user interface similar to iTunes, and mini-player mode Keyboard shortcuts and media keyboard support Last.fm integration via a plugin, complete with love/hate buttons Microsoft MTP compatible device support Ability to edit and save metadata tags Gapless playback and ReplayGain Watch folders Media import and export (from and to iTunes) Automatic Library Files Organization Add-ons Extensions Users can add features and change functionality in Nightingale by installing extensions. Extensions are similar to the Extensions for the Firefox browser and can be easily ported. Community coded extensions are available on The Nightingale Addons Page. Skins Skins are referred to as "feathers" in Nightingale, and give users and artists the ability to change the look of Nightingale via an extension which generates a default skin. Using CSS (and optionally XUL), and an image manipulation program such as Photoshop or GIMP, users are then able to make Nightingale look however they want. References External links Frenchbirds, a French Nightingale community 2006 software Free software projects Free software programmed in C++ Free media players Free web browsers Gecko-based software Gopher clients IPod software Jukebox-style media players macOS media players Solaris media players Tag editors Windows media players Windows web browsers Portable software Linux media players Software that uses GStreamer Software that uses XUL Discontinued web browsers
17058007
https://en.wikipedia.org/wiki/Component%20Object%20Model
Component Object Model
Component Object Model (COM) is a binary-interface standard for software components introduced by Microsoft in 1993. It is used to enable inter-process communication object creation in a large range of programming languages. COM is the basis for several other Microsoft technologies and frameworks, including OLE, OLE Automation, Browser Helper Object, ActiveX, COM+, DCOM, the Windows shell, DirectX, UMDF and Windows Runtime. The essence of COM is a language-neutral way of implementing objects that can be used in environments different from the one in which they were created, even across machine boundaries. For well-authored components, COM allows reuse of objects with no knowledge of their internal implementation, as it forces component implementers to provide well-defined interfaces that are separated from the implementation. The different allocation semantics of languages are accommodated by making objects responsible for their own creation and destruction through reference-counting. Type conversion casting between different interfaces of an object is achieved through the QueryInterface method. The preferred method of "inheritance" within COM is the creation of sub-objects to which method "calls" are delegated. COM is an interface technology defined and implemented as standard only on Microsoft Windows and Apple's Core Foundation 1.3 and later plug-in application programming interface (API). The latter only implements a subset of the whole COM interface. For some applications, COM has been replaced at least to some extent by the Microsoft .NET framework, and support for Web Services through the Windows Communication Foundation (WCF). However, COM objects can be used with all .NET languages through .NET COM Interop. Networked DCOM uses binary proprietary formats, while WCF encourages the use of XML-based SOAP messaging. COM is very similar to other component software interface technologies, such as CORBA and Enterprise JavaBeans, although each has its own strengths and weaknesses. Unlike C++, COM provides a stable application binary interface (ABI) that does not change between compiler releases. This makes COM interfaces attractive for object-oriented C++ libraries that are to be used by clients compiled using different compiler versions. History One of the first methods of interprocess communication in Windows was Dynamic Data Exchange (DDE), first introduced in 1987, that allowed sending and receiving messages in so-called "conversations" between applications. Antony Williams, who was involved in the creation of the COM architecture, later distributed two internal papers in Microsoft that embraced the concept of software components: Object Architecture: Dealing With the Unknown – or – Type Safety in a Dynamically Extensible Class Library in 1988 and On Inheritance: What It Means and How To Use It in 1990. These provided the foundation of many of the ideas behind COM. Object Linking and Embedding (OLE), Microsoft's first object-based framework, was built on top of DDE and designed specifically for compound documents. It was introduced with Word for Windows and Excel in 1991, and was later included with Windows, starting with version 3.1 in 1992. An example of a compound document is a spreadsheet embedded in a Word for Windows document: as changes are made to the spreadsheet within Excel, they appear automatically inside the Word document. In 1991, Microsoft introduced Visual Basic Extensions (VBX) with Visual Basic 1.0. 
A VBX is a packaged extension in the form of a dynamic-link library (DLL) that allows objects to be graphically placed in a form and manipulated by properties and methods. These were later adapted for use by other languages such as Visual C++. In 1992, when version 3.1 of Windows was released, Microsoft released OLE 2 with its underlying object model. The COM Application binary interface (ABI) was the same as the MAPI ABI (released in 1992), and like it was based on MSRPC and ultimately on the Open Group's DCE/RPC. While OLE 1 was focused on compound documents, COM and OLE 2 were designed to address software components in general. Text conversations and Windows messages had proved not to be flexible enough to allow sharing application features in a robust and extensible way, so COM was created as a new foundation, and OLE changed to OLE2. In 1994 OLE custom controls (OCXs) were introduced as the successor to VBX controls. At the same time, Microsoft stated that OLE 2 would just be known as "OLE", and that OLE was no longer an acronym, but a name for all of the company's component technologies. In early 1996, Microsoft found a new use for OLE Custom Controls, expanding their Web browser's capability to present content, renamed some parts of OLE relating to the Internet "ActiveX", and gradually renamed all OLE technologies to ActiveX, except the compound document technology that was used in Microsoft Office. Later that year, Microsoft extended COM to work across the network with DCOM. Related technologies COM was the major software development platform for Windows and, as such, influenced development of a number of supporting technologies. It was likewise heavily influenced by earlier technologies. DDE COM replaced DDE as the preferred form of interprocess communication. DCE/RPC and MSRPC As a cross-language component model, COM relies on an interface definition language, or IDL, to describe the objects and associated functions. The COM IDL is based heavily on the feature-rich DCE/RPC IDL, with object-oriented extensions. Microsoft's own implementation of DCE/RPC, known as MSRPC, is heavily used as the primary inter-process communication mechanism for Windows NT services and internal components, making it an obvious choice of foundation. DCOM DCOM (Distributed COM) extended the reach of COM from merely supporting a single user with separate applications communicating on the Windows desktop, to activating objects running under different security contexts, and on different machines across the network. With this were added necessary features for configuring which users have authority to create, activate and call objects, for identifying the calling user, as well as specifying required encryption for security of calls. COM+ In order for Microsoft to provide developers with support for distributed transactions, resource pooling, disconnected applications, event publication and subscription, better memory and processor (thread) management, as well as to position Windows as an alternative to other enterprise-level operating systems, Microsoft introduced a technology called Microsoft Transaction Server (MTS) on Windows NT 4. With Windows 2000, that significant extension to COM was incorporated into the operating system (as opposed to the series of external tools provided by MTS) and renamed COM+. At the same time, Microsoft de-emphasized DCOM as a separate entity. 
Components that made use of COM+ services were handled more directly by the added layer of COM+, in particular by operating system support for interception. In the first release of MTS, interception was tacked on: installing an MTS component would modify the Windows Registry to call the MTS software, and not the component directly. Windows 2000 also revised the Component Services control panel application used to configure COM+ components. An advantage of COM+ was that it could be run in "component farms". Instances of a component, if coded properly, could be pooled and reused by new calls to its initializing routine without unloading it from memory. Components could also be distributed (called from another machine). COM+ and Microsoft Visual Studio provided tools to make it easy to generate client-side proxies, so although DCOM was used to make the remote call, it was easy for developers to do. COM+ also introduced a subscriber/publisher event mechanism called COM+ Events, and provided a new way of leveraging MSMQ (a technology that provides inter-application asynchronous messaging) with components called Queued Components. COM+ events extend the COM+ programming model to support late-bound (see Late binding) events or method calls between the publisher or subscriber and the event system. .NET Microsoft .NET provides means both to provide component technology and to interact with COM+ (via COM interop assemblies); .NET provides wrappers to most of the commonly used COM controls. Microsoft .NET hides most detail from component creation and therefore eases development. .NET can leverage COM+ via the System.EnterpriseServices namespace, and several of the services that COM+ provides have been duplicated in recent releases of .NET. For example, the System.Transactions namespace in .NET provides the TransactionScope class, which provides transaction management without resorting to COM+. Similarly, queued components can be replaced by Windows Communication Foundation with an MSMQ transport. (MSMQ is a native COM component, however.) There is limited support for backward compatibility. A COM object may be used in .NET by implementing a Runtime Callable Wrapper (RCW). .NET objects that conform to certain interface restrictions may be used in COM objects by calling a COM callable wrapper (CCW). From both the COM and .NET sides, objects using the other technology appear as native objects. See COM Interop. WCF (Windows Communication Foundation) eases a number of COM's remote execution challenges. For instance, it allows objects to be transparently marshalled by value across process or machine boundaries more easily. Windows Runtime Microsoft's Windows Runtime (or WinRT, not to be confused with Windows RT) programming and application model is essentially a COM-based API, although it relies on an enhanced COM. Because of its COM-like basis, Windows Runtime allows relatively easy interfacing from multiple languages, just as COM does, but it is essentially an unmanaged, native API. The API definitions are, however, stored in ".winmd" files, which are encoded in ECMA 335 metadata format, the same CLI metadata format that .NET uses with a few modifications. This common metadata format allows for significantly less overhead than P/Invoke when WinRT is invoked from .NET applications, and its syntax is much simpler.
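As a hedged illustration of the claim that the Windows Runtime is essentially COM underneath, the following C++ sketch activates a stock WinRT class through the raw, COM-style ABI rather than through a language projection. RoInitialize, WindowsCreateString and RoActivateInstance are real Windows APIs; the choice of Windows.Foundation.Collections.PropertySet is only a convenient example of an activatable class, and the program is assumed to be linked against runtimeobject.lib.

#include <roapi.h>        // RoInitialize, RoActivateInstance, RoUninitialize
#include <winstring.h>    // WindowsCreateString, WindowsDeleteString
#include <inspectable.h>  // IInspectable, which derives from IUnknown
#include <wchar.h>

int wmain()
{
    // Enter the Windows Runtime on this thread (a COM apartment underneath).
    HRESULT hr = RoInitialize(RO_INIT_MULTITHREADED);
    if (FAILED(hr)) return 1;

    // WinRT identifies classes by activatable class name rather than a bare CLSID.
    const wchar_t name[] = L"Windows.Foundation.Collections.PropertySet";
    HSTRING className = nullptr;
    hr = WindowsCreateString(name, (UINT32)wcslen(name), &className);

    // Activation hands back an IInspectable*, reference-counted like any COM object.
    IInspectable* instance = nullptr;
    if (SUCCEEDED(hr))
        hr = RoActivateInstance(className, &instance);

    if (instance) instance->Release();   // ordinary COM reference counting
    WindowsDeleteString(className);
    RoUninitialize();
    return SUCCEEDED(hr) ? 0 : 1;
}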
Nano-COM (a.k.a. XPCOM) Nano-COM is an extremely small subset of the Component Object Model that is focused exclusively on the Application Binary Interface (ABI) aspects of COM that enable function and method calls across independently compiled modules/components. Nano-COM can be expressed easily in a single C++ header file that is portable to all C++ compilers. Nano-COM extends the native ABI of the underlying instruction architecture and OS to add support for typed object references (typical ABIs focus only on atomic types, structures, arrays and function calling conventions). The basis of Nano-COM was used by Mozilla to bootstrap Firefox (called XPCOM), and is currently in use as the base ABI technology for DirectX/Direct3D/DirectML. A Nano-COM header file defines or names at least three types: GUID, to identify interface types (effectively a 128-bit number); HRESULT, to identify error codes from method calls (effectively a standardized usage of 32-bit integers for well-known values such as S_OK, E_FAIL and E_OUTOFMEMORY); and IUnknown, as the base type for all typed object references (effectively abstract virtual functions to support dynamic_cast<T>-style acquisition of new interface types and reference counting a la shared_ptr<T>). Many uses of Nano-COM also define two functions to address callee-allocated memory buffers as results: <NanoCom>Alloc, called by method implementations to allocate raw buffers (not objects) that are returned to the caller, and <NanoCom>Free, called by method callers to free callee-allocated buffers once no longer in use. Some implementations of Nano-COM, such as Direct3D, eschew the allocator functions and restrict themselves to only use caller-allocated buffers. Nano-COM has no notion of classes, apartments, marshaling, registration, etc. Rather, object references are simply passed across function boundaries and allocated via standard language constructs (e.g., the C++ new operator). Security COM and ActiveX components are run as native code on the user's machine, with no sandboxing. There are therefore few restrictions on what the code can do. The prior practice of embedding ActiveX components on web pages with Internet Explorer did therefore lead to problems with malware infections. Microsoft recognized the problem with ActiveX as far back as 1996, when Charles Fitzgerald said, "We never made the claim up front that ActiveX is intrinsically secure". Recent versions of Internet Explorer prompt the user before installing ActiveX controls, enabling the user to disallow installation of controls from sites that the user does not trust. The ActiveX controls are signed with digital signatures to guarantee their authenticity. It is also possible to disable ActiveX controls altogether, or to allow only a selected few. The transparent support for out-of-process COM servers still promotes software safety in terms of process isolation. This can be useful for decoupling subsystems of a large application into separate processes. Process isolation limits state corruption in one process from negatively affecting the integrity of the other processes, since they only communicate through strictly defined interfaces. Thus, only the affected subsystem needs to be restarted in order to regain valid state. This is not the case for subsystems within the same process, where a rogue pointer in one subsystem can randomly corrupt other subsystems. Technical details COM programmers build their software using COM-aware components.
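Before the technical details that follow, it may help to see how small the core contract actually is. The sketch below captures the Nano-COM subset described above in plain, portable C++; the NanoGuid, NanoResult and INanoUnknown names are illustrative stand-ins invented for this example, not types from any shipped Microsoft header.

#include <cstdint>
#include <atomic>

struct NanoGuid {                       // 128-bit interface identifier
    uint32_t Data1; uint16_t Data2; uint16_t Data3; uint8_t Data4[8];
};

typedef int32_t NanoResult;             // HRESULT-style status code
const NanoResult NANO_S_OK          = 0;
const NanoResult NANO_E_NOINTERFACE = (NanoResult)0x80004002L;

// The entire binary contract: three virtual slots, laid out in declaration
// order, which is exactly the vtable layout the COM ABI relies on.
struct INanoUnknown {
    virtual NanoResult QueryInterface(const NanoGuid& iid, void** out) = 0;
    virtual uint32_t   AddRef()  = 0;
    virtual uint32_t   Release() = 0;   // destroys the object when it reaches zero
};

// A trivial single-interface component showing the reference-counting side.
struct Thing : INanoUnknown {
    std::atomic<uint32_t> refs{1};
    NanoResult QueryInterface(const NanoGuid&, void** out) override {
        *out = this; AddRef(); return NANO_S_OK;    // one interface only
    }
    uint32_t AddRef() override  { return ++refs; }
    uint32_t Release() override {
        uint32_t r = --refs;
        if (r == 0) delete this;
        return r;
    }
};

Everything else in COM, from class factories to apartments and marshalling, is layered on top of a contract no larger than this.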
Different component types are identified by class IDs (CLSIDs), which are Globally Unique Identifiers (GUIDs). Each COM component exposes its functionality through one or more interfaces. The different interfaces supported by a component are distinguished from each other using interface IDs (IIDs), which are GUIDs too. COM interfaces have bindings in several languages, such as C, C++, Visual Basic, Delphi, Python and several of the scripting languages implemented on the Windows platform. All access to components is done through the methods of the interfaces. This allows techniques such as inter-process, or even inter-computer programming (the latter using the support of DCOM). Interfaces All COM components implement the IUnknown (custom) interface, which exposes methods for reference counting and type conversion (casting). A custom IUnknown interface consists of a pointer to a virtual method table that contains a list of pointers to the functions that implement the functions declared in the interface, in the same order that they are declared in the interface. The in-process invocation overhead is therefore comparable to virtual method calls in C++. In addition to custom interfaces, COM also supports dispatch interfaces inheriting from IDispatch. Dispatch interfaces support late binding for OLE Automation. This allows dispatch interfaces to be natively accessed from a wider range of programming languages than custom interfaces. Classes A COM class ("coclass") is a concrete implementation of one or more interfaces, and closely resembles classes in object-oriented programming languages. Classes are created based on their class ID (CLSID) or based on their programmatic identifier string (ProgID). Like many object-oriented languages, COM provides a separation of interface from implementation. This distinction is especially strong in COM, where objects cannot be accessed directly, but only through their interfaces. COM also has support for multiple implementations of the same interface, so that clients at runtime can choose which implementation of an interface to instantiate. Interface Definition Language and type libraries Type libraries contain metadata to represent COM types. These types are described using Microsoft Interface Definition Language (MSIDL/IDL). IDL files define object-oriented classes, interfaces, structures, enumerations and other user-defined types in a language independent manner. IDL is similar in appearance to C++ declarations with some additional keywords such as "interface" and "library" for defining interfaces and collections of classes. IDL also supports the use of bracketed attributes before declarations to provide additional information, such as interface GUIDs and the relationships between pointer parameters and length fields. IDL files are compiled by the MIDL compiler. For C/C++, the MIDL compiler generates a compiler-independent header file containing struct definitions to match the vtbls of the declared interfaces and a C file containing declarations of the interface GUIDs. C++ source code for a proxy module can also be generated by the MIDL compiler. This proxy contains method stubs for converting COM calls into remote procedure calls to enable DCOM for out-of-process communication. IDL files can also be compiled by the MIDL compiler into a type library (TLB). TLB files contain binary metadata that can be processed by different language compilers and runtime environments (e.g. VB, Delphi, .NET etc.) 
to generate language-specific constructs to represent the COM types defined in the TLB. For C++, this will convert the TLB back to its IDL representation. Object framework Because COM is a runtime framework, types have to be individually identifiable and specifiable at runtime. To achieve this, globally unique identifiers (GUIDs) are used. Each COM type is designated its own GUID for identification at runtime. In order for information on COM types to be accessible at both compile time and runtime, COM uses type libraries. It is through the effective use of type libraries that COM achieves its capabilities as a dynamic framework for the interaction of objects. Consider the following example coclass definition in an IDL: coclass SomeClass { [default] interface ISomeInterface; }; The above code fragment declares a COM class named SomeClass which implements an interface named ISomeInterface. This is conceptually equivalent to defining the following C++ class: class SomeClass : public ISomeInterface { ... ... }; where ISomeInterface is a C++ pure virtual class (sometimes called an abstract base class). The IDL files containing COM interfaces and classes are compiled into type libraries (TLB) files, which can later be parsed by clients at runtime to determine which interfaces an object supports, and invoke an object's interface methods. In C++, COM objects are instantiated with the CoCreateInstance function that takes the class ID (CLSID) and interface ID (IID) as arguments. Instantiation of SomeClass can be implemented as follows: ISomeInterface* interface_ptr = NULL; HRESULT hr = CoCreateInstance(CLSID_SomeClass, NULL, CLSCTX_ALL, IID_ISomeInterface, (void**)&interface_ptr); In this example, the COM sub-system is used to obtain a pointer to an object that implements ISomeInterface interface, and coclass CLSID_SomeClass's particular implementation of this interface is required. Reference counting All COM objects utilize reference counting to manage object lifetimes. The reference counts are controlled by the clients through the AddRef and Release methods in the mandatory IUnknown interface that all COM objects implement. COM objects are then responsible for freeing their own memory when the reference count drops to zero. Certain languages (e.g. Visual Basic) provide automatic reference counting so that COM object developers need not explicitly maintain any internal reference counter in their source codes. In C++, a coder may either perform explicit reference counting or use smart pointers to automatically manage the reference counts. The following are guidelines for when to call AddRef and Release on COM objects: Functions and methods that return interface references (via return value or via "out" parameter) shall increment the reference count of the returned object before returning. Release must be called on an interface pointer before the pointer is overwritten or goes out of scope. If a copy is made on an interface reference pointer, AddRef should be called on that pointer. AddRef and Release must be called on the specific interface which is being referenced since an object may implement per-interface reference counts in order to allocate internal resources only for the interfaces which are being referenced. Not all reference count calls are sent to remote objects over the wire; a proxy keeps only one reference on the remote object and maintains its own local reference count. To simplify COM development, Microsoft introduced ATL (Active Template Library) for C++ developers. 
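As a hedged sketch of what that help looks like in client code, the example below uses ATL's CComPtr smart pointer, which applies the AddRef/Release rules listed above automatically. The Windows Shell's ShellLink coclass is used only because it is registered on any Windows system; it is not part of the article's earlier illustrative example.

#include <atlbase.h>      // CComPtr
#include <shobjidl.h>     // IShellLinkW and the ShellLink coclass declaration
#include <objbase.h>      // CoInitializeEx, CoUninitialize

int main()
{
    if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED))) return 1;
    {
        CComPtr<IShellLinkW> link;
        // Wraps the raw CoCreateInstance call shown earlier; on success the
        // smart pointer holds the single reference the rules above require.
        HRESULT hr = link.CoCreateInstance(__uuidof(ShellLink));
        if (SUCCEEDED(hr))
            link->SetDescription(L"created through a COM smart pointer");
    }   // CComPtr's destructor calls Release() here, so no manual counting
    CoUninitialize();
    return 0;
}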
To simplify COM development, Microsoft introduced ATL (Active Template Library) for C++ developers. ATL provides a higher-level COM development paradigm. It also shields COM client application developers from the need to directly maintain reference counting, by providing smart pointer objects. Other libraries and languages that are COM-aware include the Microsoft Foundation Classes, the VC Compiler COM Support, VBScript, Visual Basic, ECMAScript (JavaScript) and Borland Delphi. Programming COM is a language-agnostic binary standard; components can be developed in any programming language capable of understanding and implementing its binary-defined data types and interfaces. COM implementations are responsible for entering and leaving the COM environment, instantiating and reference-counting COM objects, querying objects for supported interfaces, and handling errors. The Microsoft Visual C++ compiler supports extensions to the C++ language referred to as C++ Attributes. These extensions are designed to simplify COM development and remove much of the boilerplate code required to implement COM servers in C++. Registry usage In Windows, COM classes, interfaces and type libraries are listed by GUIDs in the registry, under HKEY_CLASSES_ROOT\CLSID for classes and HKEY_CLASSES_ROOT\Interface for interfaces. COM libraries use the registry to locate either the correct local libraries for each COM object or the network location for a remote service. Registration-free COM Registration-Free COM (RegFree COM) is a technology introduced with Windows XP that allows Component Object Model (COM) components to store activation metadata and CLSIDs (class IDs) without using the registry. Instead, the metadata and CLSIDs of the classes implemented in the component are declared in an assembly manifest (described using XML), stored either as a resource in the executable or as a separate file installed with the component. This allows multiple versions of the same component to be installed in different directories, described by their own manifests, as well as XCOPY deployment. This technique has limited support for EXE COM servers and cannot be used for system-wide components such as MDAC, MSXML, DirectX or Internet Explorer. During application loading, the Windows loader searches for the manifest. If it is present, the loader adds information from it to the activation context. When the COM class factory tries to instantiate a class, the activation context is first checked to see if an implementation for the CLSID can be found. Only if the lookup fails is the registry scanned. Manually instantiating COM objects COM objects can also be created manually, given the path of the DLL file and the GUID of the object. This does not require the DLL or GUID to be registered in the system registry, and does not make use of manifest files. A COM DLL exports a function named DllGetClassObject. Calling DllGetClassObject with the desired GUID and IID_IClassFactory provides an instance of a factory object. The factory object has a CreateInstance method, which can create instances of an object given an interface GUID. This is the same process used internally when creating instances of registered COM components. If the created COM object instantiates another COM object using the generic CoCreateInstance API, it will try to do so in the usual generic way, using the registry or manifest files. But it can create internal objects (which may not be registered at all), and hand out interface references to them, using its own private knowledge.
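As a rough illustration of the mechanism just described, the following hedged C++ sketch loads a COM DLL directly and asks its class factory for an object. The DLL name is a placeholder and error handling is reduced to the essentials; a real caller might also consult DllCanUnloadNow before unloading the DLL.

// Hypothetical sketch of registry-free activation through DllGetClassObject.
// "somecomponent.dll" is a placeholder; clsid and iid identify the desired class and interface.
#include <windows.h>
#include <unknwn.h>   // IUnknown, IClassFactory, IID_IClassFactory

typedef HRESULT (__stdcall *DllGetClassObjectFn)(REFCLSID, REFIID, void**);

HRESULT CreateWithoutRegistry(REFCLSID clsid, REFIID iid, void** ppv)
{
    HMODULE dll = LoadLibraryW(L"somecomponent.dll");   // must stay loaded while objects live
    if (dll == NULL)
        return HRESULT_FROM_WIN32(GetLastError());

    DllGetClassObjectFn getClassObject =
        (DllGetClassObjectFn)GetProcAddress(dll, "DllGetClassObject");
    if (getClassObject == NULL)
        return E_FAIL;

    IClassFactory* factory = NULL;
    HRESULT hr = getClassObject(clsid, IID_IClassFactory, (void**)&factory);
    if (SUCCEEDED(hr))
    {
        hr = factory->CreateInstance(NULL, iid, ppv);   // ask the factory for the object itself
        factory->Release();
    }
    return hr;
}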
Process and network transparency COM objects can be transparently instantiated and referenced from within the same process (in-process), across process boundaries (out-of-process), or remotely over the network (DCOM). Out-of-process and remote objects use marshalling to serialize method calls and return values over process or network boundaries. This marshalling is invisible to the client, which accesses the object as if it were a local in-process object. Threading In COM, threading is addressed through a concept known as apartments. An individual COM object lives in exactly one apartment, which might either be single-threaded or multi-threaded. There are three types of apartments in COM: Single-Threaded Apartment (STA), Multi-Threaded Apartment (MTA), and Thread Neutral Apartment (NA). Each apartment represents one mechanism whereby an object's internal state may be synchronized across multiple threads. A process can consist of multiple COM objects, some of which may use STA and others of which may use MTA. All threads accessing COM objects similarly live in one apartment. The choice of apartment for COM objects and threads are determined at run-time, and cannot be changed. Threads and objects which belong to the same apartment follow the same thread access rules. Method calls which are made inside the same apartment are therefore performed directly without any assistance from COM. Method calls made across apartments are achieved via marshalling. This requires the use of proxies and stubs. Criticisms Since COM has a fairly complex implementation, programmers can be distracted by some of the "plumbing" issues. Message pumping When an STA is initialized it creates a hidden window that is used for inter-apartment and inter-process message routing. This window must have its message queue regularly "pumped". This construct is known as a "message pump". On earlier versions of Windows, failure to do so could cause system-wide deadlocks. This problem is complicated by some Windows APIs that initialize COM as part of their implementation, which causes a "leak" of implementation details. Reference counting Reference counting within COM may cause problems if two or more objects are circularly referenced. The design of an application must take this into account so that objects are not left orphaned. Objects may also be left with active reference counts if the COM "event sink" model is used. Since the object that fires the event needs a reference to the object reacting to the event, the latter's reference count will never reach zero. Reference cycles are typically broken using either out-of-band termination or split identities. In the out-of-band termination technique, an object exposes a method which, when called, forces it to drop its references to other objects, thereby breaking the cycle. In the split identity technique, a single implementation exposes two separate COM objects (also known as identities). This creates a weak reference between the COM objects, preventing a reference cycle. DLL Hell Because in-process COM components are implemented in DLL files and registration only allows for a single version per CLSID, they might in some situations be subject to the "DLL Hell" effect. Registration-free COM capability eliminates this problem for in-process components; registration-free COM is not available for out-of-process servers. 
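The apartment rules and the message-pump requirement described above can be illustrated with a small, hedged C++ sketch of a worker thread; it is only the thread-initialization skeleton, not a complete COM client or server.

// Minimal sketch of apartment selection and an STA message pump.
#include <windows.h>
#include <objbase.h>

DWORD WINAPI StaThread(LPVOID)
{
    // Enter a single-threaded apartment; COINIT_MULTITHREADED would join the MTA instead.
    HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
    if (FAILED(hr))
        return 1;

    // ... create and use apartment-threaded COM objects here ...

    // The hidden STA window is serviced by an ordinary message loop (the "message pump");
    // incoming cross-apartment calls are delivered to this thread through it.
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    CoUninitialize();
    return 0;
}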
See also Portable object (computing) cross language cross platform Object Model definition Distributed Component Object Model (DCOM), extension making COM able to work in networks Common Language Infrastructure current .NET cross language cross platform Object Model Windows Runtime, an application model, evolved version of COM targeting Windows 8 CORBA Common Object Request Broker Architecture, open cross language cross platform object model D-Bus open cross language cross platform Object Model KParts KDE component framework SOM IBM System Object Model, feature-rich alternative to COM XPCOM Mozilla applications cross Platform Component Object Model Enterprise JavaBeans Java Remote Method Invocation Internet Communications Engine Language binding Foreign function interface Calling convention Name mangling Application programming interface - API Application Binary Interface - ABI SWIG opensource automatic interfaces bindings generator from many languages to other languages Notes References External links Component Object Model on MSDN Interview with Tony Williams, Co-Inventor of COM (Video Webcast, August 2006) Info: Difference Between OLE Controls and ActiveX Controls from Microsoft TypeLib Data Format Specification (unofficial) with open source dumper utility. The COM / DCOM Glossary Component-based software engineering Inter-process communication Microsoft application programming interfaces Object models Object request broker Object-oriented programming
45583070
https://en.wikipedia.org/wiki/Synergy%20International%20Systems
Synergy International Systems
Synergy International Systems, Inc. ("Synergy") is an information technology and consulting company based in Washington, D.C. that provides web-based software to international development agencies, country governments, NGOs and private sector partners. Its key products focus on monitoring and evaluation (M&E), national development effectiveness and aid management, judicial system modernization, social protection, public financial management (PFM), disaster relief and reconstruction, environment, education, and public health. The company maintains a Global Learning Center in Yerevan, Armenia. Its services include software development and customization, IT strategy consulting, systems integration, capacity development and technical support. Synergy has developed management information systems for public and private sector clients in 65 countries. History The company was founded in Washington, D.C. in 1997. It is incorporated under the laws of the Commonwealth of Virginia in the United States. Synergy's first product was a PC system, the Donor Assistance Database, developed within the scope of the G7 Support Implementation Group project in 1996 to monitor aid assistance donated by the international community. The system was developed for the Russian Federation in 1996 and was later extended to the Newly Independent States of the former Soviet Union, including Armenia, Georgia, Kazakhstan, Kyrgyzstan, Turkmenistan, Ukraine, and Tajikistan. Synergy International Systems opened its Global Learning Center in Yerevan, Armenia, in 1999. Synergy currently employs around 200 staff in the areas of software development, systems analysis, systems integration, network administration, quality assurance, and business analysis. Clients The company's clients include government ministries and development partners, with geographical coverage that includes countries in Africa, Asia-Pacific, Australia, the Caribbean, Central America, Europe, and the Middle East. Synergy's clients also include such international organizations as the Asian Development Bank (ADB); the Global Fund to Fight AIDS, Tuberculosis and Malaria; German Development Cooperation (GIZ); the Inter-American Development Bank (IDB); the Millennium Challenge Corporation; the Organisation of Eastern Caribbean States (OECS); the United Nations Development Programme (UNDP); the United Nations Office for Project Services (UNOPS); the United States Agency for International Development (USAID); the United States Department of State (DOS); and the World Bank. Since 2005, Synergy has been working with the International Aid Transparency Initiative (IATI) as a member of the IATI Technical Advisory Group (TAG), aligning its aid information management solutions with the international protocols and principles of aid transparency. Products Synergy Indicata: M&E Software - An online monitoring and evaluation tool that captures performance and results data to measure the efficiency, effectiveness, impact and sustainability of implemented projects and programs. The software provides data management, performance monitoring, and indicator tracking capabilities to improve an organization's monitoring and evaluation (M&E) processes and measure development results.
Case Management System (CMS) - A web-based tool for justice sector institutions that allows courts to move from paper-based case filing to automation of the entire case life cycle, from case initiation, assignment, review and approval to case processing and closure. The product is designed to make better use of legal data, improve business processes and increase the efficiency of the courts. Development Assistance Database (DAD) - Aid information management software for aid information collection, tracking, analysis and planning, which the company claims is used in more than 35 countries. Through DAD, donor agencies in a country report on their development assistance projects, including project funding and results data; the system is designed to analyze aid flows, harmonize projects with policy priorities, and manage development funds with greater transparency and accountability. Social Protection Information System (SPIS) - Web-based software for managing social protection and social safety net programs. It is designed as an online toolset for the delivery of social benefits to poor and vulnerable people, providing beneficiary management, payment administration and cash transfers, monitoring and evaluation, and feedback and complaints management. Public Investment Management (PIM) Suite - Web-based software designed to support management of the public investment program (PIP) life cycle, including submission, screening and approval of project proposals and allocation and execution of the capital budget. Post-Disaster Management Suite - A web-based software platform for planning, tracking and coordinating post-disaster recovery and reconstruction activities. It captures critical information such as affected locations and populations, emergency response efforts, and estimates of damages, losses and needs, and then matches this information with disaster recovery projects. State Budgeting System (SBS) - A public finance software solution that helps country governments collect, analyze, and report key data for budget planning and execution. It automates the budget preparation and planning processes, tracks revisions and negotiations between the Ministry of Finance and line ministries, analyzes budget data and monitors budget execution. Education management information system (EMIS) - An education data collection, analysis and reporting toolset targeted at the needs of education ministries, educational institutions, and policy-makers. Technology The technological foundation of Synergy's products is the proprietary technology platform Intelligent Data Manager (IDM™). IDM is a commercial off-the-shelf (COTS) software technology platform that enables the creation of database-driven business applications. The core of IDM's metadata-driven architecture, the knowledgebase, is a metadata repository that stores the logical representation of application features and functionality. This knowledgebase and the resulting application data are powered by a modular suite of capabilities for data management, visualization, reporting, business process automation and user management. A client's business requirements are mapped onto the knowledgebase to create a wide array of data-driven business applications. Recognition and awards Gartner included Synergy International Systems as a representative vendor in its 2015 Market Guide for Enterprise Program and Portfolio Management (EPPM) Software.
In 2014, Synergy International Systems was recognized as one of the 20 most promising project management solutions. Synergy was also awarded the Innovative Government Technology Award in the Information Management category at the 2008 FutureGov Summit for the development of the Recovery Aceh Nias (RAN) Database installed for the BRR – Reconstruction and Rehabilitation Agency of the Government of Indonesia. See also Development Assistance Database (DAD) References External links Official website Information technology companies of the United States
58423359
https://en.wikipedia.org/wiki/Live2D
Live2D
Live2D is the technique of generating 2D animations, usually anime-style characters, using layered, continuous parts based on an illustration, without the need for frame-by-frame animation or a 3D model. This enables characters to move at low cost while preserving the original illustration, balancing the cost and effect of an animation. Live2D is also the name of the animation software series employing the technique, created by Japanese programmer Tetsuya Nakajo. Live2D characters consist of layered parts. Parts are separately moved to show the whole animation and expression of the character, such as tilting the head. Parts can be as simple as face, hair, and body, or as detailed as eyebrows, eyelashes, and even separate sections of hair that are meant to move differently. The number of layers depends on how detailed the character's movements need to be. The layers are rigged to a skeleton to form a whole animated character. Live2D can be used with motion capture to track movements and perform lip syncing for real-time applications such as vtubing. A downside of the technology is that there is currently no official support for 360° rotation. It is also difficult to perform large-angle turns for complex images or characters. Live2D has been used in a wide variety of video games, visual novels, virtual YouTuber channels, and other media. Well-known examples of Live2D media and software include FaceRig, Nekopara, Azur Lane, and virtual YouTubers (as popularized by Hololive and others). History Live2D was first introduced in 2008 to meet the needs of interactive media. Since then, the technology has also changed how games enhance the user experience through lively characters and expressions. In 2009, Cubism (now Live2D) released its first Live2D application, Live2D vector. The application transforms vector graphics so that a flat character image can achieve three-dimensional head-turning and movement effects. Although such a character can only perform limited movements, it looks far livelier than static pictures or slideshows. Users can also customize their own moving characters by adjusting parameters in the software or by collecting materials such as images of a character from different angles. Vector graphics still have limitations, however. Although storage requirements are reduced, rendering complex images consumes a great deal of CPU time. Another disadvantage is that the approach cannot reproduce certain painting styles, such as oil and gouache. The first application of the Live2D technique was HibikiDokei, an alarm app released in 2010 by sandwichproject (株式会社レジストプランニング). The app features a girl character named "hibiki" who talks and moves. In 2011, the PSP game Ore no Imōto ga Konna ni Kawaii Wake ga Nai Portable, released by NAMCO BANDAI Games Inc., became the first game to use the O.I.U system derived from Live2D technology; characters move and change position and expression while talking to the player. Characters moved expressively and seamlessly on screen, like an anime, which surprised players and triggered the popularity of Live2D. Software Live2D Ltd. Software developer Tetsuya Nakashiro had been independently developing Live2D software, and founded the company Cyber Noise (or Cybernoids, Japanese: サイバーノイズ) in 2006 with support from the Exploratory IT Human Resources Project of the Japanese Information Technology Promotion Agency (IPA).
Because of its novelty and lack of uptake, Cyber Noise was unsuccessful. In 2011, Live2D software received attention after its use in the PSP game Ore no Imōto ga Konna ni Kawaii Wake ga Nai Portable. It subsequently received interest as a library for Android and iOS. Following this success, Cyber Noise renamed itself Live2D Ltd. in 2014, unifying the company name with its product name. Sales of Live2D have grown significantly since then. In 2021, 70% of Live2D Cubism Pro users were VTubers, followed by games/apps (videos) and animation/video works. Live2D Ltd. provides its software and SDKs both under commercial licenses and as freeware. Live2D Ltd. Software Live2D Cubism Live2D Euclid (released in April 2017, no longer available from October 16, 2018) Official Marketplace Nizima ("Secondary Ma" before 2019) is the official Live2D marketplace operated by Live2D Ltd. The market serves as a platform where users can buy and sell illustrations, Live2D data, or made-to-order commissions. Illustrators and Live2D creators are able to work together on a character and share sales on the platform. The platform also provides a Live2D preview so users can view and move a model before purchasing. Third-party Avatar software Some software programs are able to create animated avatars by combining the Live2D system with real-time motion capture and computer-generated imagery, including: Works using Live2D Visual novels Mobile games Console games PC games See also Avatar (computing) Computer animation Computer-generated imagery Uncanny valley Virtual actor Virtual idol Virtual influencer Virtual YouTuber References External links Live2D homepage Animation software Japanese inventions
32564822
https://en.wikipedia.org/wiki/Virsto
Virsto
Virsto develops a VM-centric storage hypervisor. The company is privately funded and headquartered in Sunnyvale, California. On February 11, 2013, VMware announced that it had signed a definitive agreement to acquire Virsto. History Virsto was founded in 2007 by Mark Davis, Alex Miroshnichenko and Serge Pashenkov. In 2009, the company announced a $7 million series A funding round. In 2011, the company announced a $17 million series B funding round. See also Storage hypervisor Storage virtualization Software defined storage References Further reading CRN, June 21, 2010. 2010 Storage Superstars: 25 You Need To Know. Virtual Strategy Magazine (podcast), February 16, 2010. Optimizing Storage Management. Dan Kusnetzky, ZDNet, April 21, 2011. Virtual Storage for Virtual Environments. George Crump, Storage Switzerland, April 22, 2011. Bridging the VDI Storage ROI Gap. Mike Vizard, CTOEdge, September 17, 2010. Virtualization Makes Managing Storage Harder. Mark Cox, eChannelLine, December 1, 2010. VDI coalition formed to establish best practices. Jacob Nthoiwa, ITWeb, September 28, 2010. Virtualisation Top of SME IT Spend. Mike Vizard, IT Business Edge, October 13, 2011. Storage Provisioning Seeking Parity in Virtual Environments. Kenneth Corbin, ServerWatch, October 19, 2011. Virsto Angling for Storage Efficiencies in Server Desktop Virtual Environments. Cloud Computing Journal, November 17, 2011. Storage Can Be the Key to Aggressively Priced Cloud Offerings. eChannelLine, April 18, 2012. Virsto Ships Virsto2.5 for Windows Server Hyper-v. Virtual-Strategy Magazine, April 6, 2012. Infographic: What's it Going to Take for VDI to Take Off?. InformationWeek, May 9, 2012. Virsto Storage Hypervisor Now Supports Citrix XenDesktop. Silicon Angle, May 9, 2012. Exploring the Storage Hypervisor. SearchVirtualStorage, May 11, 2012. Virsto Software Adds Support for XenDesktop in VDI Environments. All Things Digital, November 21, 2012. The Storage Games. Silicon Angle, November 27, 2012. Software-Defined Storage is the Missing Link in the Software-Defined Data Center. Enterprise Systems Journal, December 3, 2012. Virsto for vSphere 2.0 Delivers Software-Defined Storage to the Data Center. ZDnet, January 18, 2013. Is Storage Virtualization the Missing Link in vSphere Environments?. External links Virsto Software companies of the United States Computer storage companies Companies based in Sunnyvale, California
32982054
https://en.wikipedia.org/wiki/Packard%20Bell%20Corporation
Packard Bell Corporation
Packard Bell Corporation (also known as Packard Bell Electronics or simply Packard Bell) was an American electronics manufacturer founded in 1933 by Herb Bell and Leon Packard. Initially they produced radios but eventually expanded into defense electronics during World War II. After the war ended, they began manufacturing other consumer electronics, including television sets. In 1957, the company became involved in the manufacture of scientific and military computers. American industrial conglomerate Teledyne Technologies acquired the business in 1968. In 1986, Israeli investors bought the name for a newly formed personal computer manufacturer, Packard Bell. History Herb Bell had been in radio electronics since 1926, when he had formed a partnership with an engineer named Neff. Bell wanted to expand from manufacturing in a one-car garage to a larger facility to increase production, but Neff disagreed and the partnership ended. In 1928, Bell formed a new venture with Edward Jackson called Jackson Bell Radio Company. Jackson left when Bell suggested producing higher priced radios that came with greater financial risk. Jackson Bell experienced financial problems during the Great Depression, but the business encountered greater difficulty in 1931 when RCA released the superheterodyne radio. Jackson Bell was licensed by RCA but struggled due to a large inventory of obsolete radios, and was liquidated. Bell then formed Packard Bell with Leon Packard, whose uncle provided the capital. In 1934, Packard Bell marketed their first radio, the Model 35A, originally developed by Jackson Bell. Herb Bell managed to obtain the RCA license for manufacturing superheterodyne radios. Based in Los Angeles, Packard Bell, along with Hoffman Radio, became well-known regional makers of consumer electronics. Leon Packard was unhappy with the direction Packard Bell Company was going and asked Herb Bell to buy his share out in 1935. Packard Bell was a family business. Bell and his four brothers Arthur, Albert, Elmer, and Willard participated in design, manufacturing and marketing. The company incorporated as Packard Bell Corporation in 1946. Packard Bell was a very profitable company during World War II, producing defense electronics, and this continued into the 1960s. One of Packard Bell's products during the Second World War was an identification, friend or foe transponder unit (designated AN/APX-92) used for aircraft. In 1955, the company went public. Packard-Bell Electronics Corp. was adopted in 1956 as the new name for the public company. The company formed a subsidiary to produce computers in 1957, which was later acquired by Raytheon in 1964. In 1968, Packard Bell was sold to Teledyne. Teledyne was interested in expanding into consumer electronics. Teledyne converted Packard Bell common stock into Teledyne common stock at a ratio of one share of Teledyne common stock for each seven and one-half shares of Packard Bell common stock. The Packard Bell name remained but with Teledyne as a prefix (as with other Teledyne operating divisions), and was renamed "Teledyne Packard Bell". Packard Bell ceased marketing television sets in 1974, joining a number of old-line manufacturers who were exiting the business, merging with stronger competitors or outsourcing manufacturing to foreign entities. There is very little information on Leon S. Packard. The growth of Packard Bell was mainly from Herb and his brothers. Born Herbert Anthony Zwiebel in 1890, his parents were Anthony (Anton) Zwiebel Jr and Anna Mary Brehm. 
Like many descendants of immigrants, there was the desire to acquire 'American-sounding' names, so Herb changed his last name to Bell. In his leisure time, Herb Bell was an avid boater and owned several pleasure crafts and yachts. He had a 98-foot yacht that he later donated to the Scripps Institution of Oceanography. He had them docked at Newport Harbor (Newport Beach, California) and often sailed from Newport Harbor to Catalina Island. He named his boats the Five Bells referring to him and his brothers. Radios Packard Bell radios had a unique styling that makes them easy to identify. Being a US West Coast radio maker, they maintained that image by "stationizing" their radio dials. Major US West Coast and Canadian (west of the Rocky Mountain range) radio station call letters were printed on the tuning dial. From 1926 through 1950, the marketing area for Packard Bell radios consisted of Arizona, California, Idaho, Nevada, Oregon and Washington. Many Packard Bell models made during this period have stationized dials with the call letters of the major stations from these states marked on the dial. These "stationized" dials also include KSL 1160 in Salt Lake City and KOA 850 in Denver. This was an idea inspired by Herb's mother who had trouble reading the tuning dial. After 1950, Packard Bell discontinued its "stationized" dials when it began selling radios and televisions throughout North America. A Packard Bell radio was used as a prop in the 1960s American television series Gilligan's Island. The Japanese made, eight-transistor AR-851 was an important plot device over the course of the three-year run of the show. Other business In 1957, Packard Bell Electronics formed a unit called Packard Bell Computer Corporation, which was led by Max Palevsky. This subsidiary developed and produced computers for specialized uses, such as industrial process control, military, and scientific applications. An example of a niche application was a truck-mounted digital computer that analyzed seismic data for the oil industry. Starting in the late 1950s, Packard Bell was involved in the defense electronics business. They produced the PB 250, released in 1960, which was one of the last users of magnetostrictive delay lines for its main memory. It was also the last machine to be partially based on the original designs of Alan Turing’s NPL ACE computer. The company also produced a digital differential analyzer (DDA) called Transistorized Realtime Incremental Computer Expandable (TRICE) which could optionally be combined with the PB 250. Together the two machines formed a hybrid general purpose/DDA computing system. The Hycomp 250 was a hybrid computer that combined the PB 250 with a small analog computer such as the EAI TR10 or TR48. Packard Bell's computer operations were acquired by Raytheon in 1964. Notes and references This article was split from the article Packard Bell. Consumer electronics brands Electronics companies established in 1933 History of radio 1933 establishments in California Electronics companies disestablished in 1968 1968 disestablishments in California 1926 in radio
57287404
https://en.wikipedia.org/wiki/Tal%20Kopan
Tal Kopan
Tal Kopan (born December 19, 1986) is the Washington correspondent for the San Francisco Chronicle and was previously an American political reporter for CNN, where she focused on immigration and cybersecurity. Biography Kal Teva Kopan was born in 1986 in Chicago, Illinois, the daughter of Esther (née Shidlovsky) and Raphael Kopan. Her father was born in Petah Tikva, Israel, and served as an infantry lieutenant in the Israel Defense Forces before emigrating to the United States to obtain a PhD at the University of Chicago (where he worked under Elaine Fuchs); he currently works as a professor of developmental biology at Washington University in St. Louis. Kopan was raised in metropolitan Saint Louis, Missouri, where she played percussion with Daniel Wittels and Marvin McNutt, and graduated with honors with an A.B. from the University of Chicago. She has one sister, Gili Kopan. During school, Kopan interned as a web producer at WFLD in Chicago and then as a freelance web producer at ABC 7 Chicago, where she covered the trial of former Illinois Governor Rod Blagojevich and the election of Chicago Mayor Rahm Emanuel. She then went to work for Politico in Washington, D.C. as a breaking news reporter and then cybersecurity reporter. She also worked for CNN as a political reporter, where she specialized in immigration and cybersecurity. She is currently the Washington correspondent for the San Francisco Chronicle. Kopan was selected as a 2014-2015 National Press Foundation Paul Miller Fellow, was a member of the 2015 class of Journalist Law School at Loyola Law School, Los Angeles, and was a recipient of the National Academy of Television Arts and Sciences, Midwest Chapter's 2008 Ephraim Family Scholarship. Personal life She is married to her high school sweetheart, Bryan McMahon. References External links University of Chicago Alumni Career Programs: Keynote - Journalism and Media An Inside Scoop by Tal Kopan March 20, 2018 1986 births CNN people Jewish American journalists University of Chicago alumni Living people 21st-century American Jews
40852882
https://en.wikipedia.org/wiki/Indian%20Computer%20Emergency%20Response%20Team
Indian Computer Emergency Response Team
The Indian Computer Emergency Response Team (CERT-IN or ICERT) is an office within the Ministry of Electronics and Information Technology of the Government of India. It is the nodal agency for dealing with cyber security threats such as hacking and phishing. It strengthens the security-related defence of the Indian Internet domain. Background CERT-IN was formed in 2004 by the Government of India under Section 70B of the Information Technology Act, 2000, under the Ministry of Communications and Information Technology. CERT-IN has overlapping responsibilities with other agencies such as the National Critical Information Infrastructure Protection Centre (NCIIPC), which is under the National Technical Research Organisation (NTRO) that comes under the Prime Minister's Office, and the National Disaster Management Authority (NDMA), which is under the Ministry of Home Affairs. Functions In December 2013, CERT-In reported a rise in cyber attacks on government organisations in sectors such as banking and finance, oil and gas, and emergency services. It issued a list of security guidelines to all critical departments. It liaises with the Office of the National Cyber Security Coordinator, the National Security Council and the National Information Board on the nation's cyber security and threats. As a nodal entity, India's Computer Emergency Response Team (CERT-In) plays a crucial role under the Ministry of Electronics and Information Technology (MeitY). Agreements A memorandum of understanding (MoU) was signed in May 2016 between the Indian Computer Emergency Response Team (CERT-In) and the Cabinet Office, UK. Earlier, CERT-In signed MoUs with similar organisations in about seven countries: Korea, Canada, Australia, Malaysia, Singapore, Japan and Uzbekistan. The Ministry of External Affairs has also signed an MoU with the Shanghai Cooperation Organisation that includes cyber security as one of the areas of cooperation. Under the MoUs, participating countries can exchange technical information on cyber attacks and responses to cyber security incidents, and find solutions to counter cyber attacks. They can also exchange information on prevalent cyber security policies and best practices. The MoUs help strengthen the cyber space of the signatory countries, build capacity and improve relationships between them. Incidents and reports In March 2014, CERT-In reported a critical flaw in Android Jelly Bean's VPN implementation. In July 2020, it warned Google Chrome users to immediately upgrade to Chrome browser version 84.0.4147.89, after multiple vulnerabilities that could allow access to hackers were reported. In April 2021, it issued a "high severity" advisory on vulnerabilities detected in WhatsApp and WhatsApp Business for Android prior to v2.21.4.18 and WhatsApp and WhatsApp Business for iOS prior to v2.21.32. References Government agencies of India Government agencies established in 2004 2004 establishments in India Cyber Security in India
52413716
https://en.wikipedia.org/wiki/Turn%20restriction%20routing
Turn restriction routing
A routing algorithm decides the path followed by a packet from the source router to the destination router in a network. An important aspect to be considered while designing a routing algorithm is avoiding a deadlock. Turn restriction routing is a routing algorithm for the mesh family of topologies that avoids deadlocks by restricting the types of turns that are allowed while determining the route from the source node to the destination node in a network. Reason for deadlock A deadlock (shown in fig 1) is a situation in which no further transportation of packets can take place due to the saturation of network resources like buffers or links. The main reason for a deadlock is the cyclic acquisition of channels in the network. For example, consider four channels in a network. Four packets have filled up the input buffers of these four channels and need to be forwarded to the next channel. Now assume that the output buffers of all these channels are also filled with packets that need to be transmitted to the next channel. If these four channels form a cycle, it is impossible to transmit packets any further because the output buffers and input buffers of all channels are already full. This is known as cyclic acquisition of channels, and it results in a deadlock. Solution to deadlock Deadlocks can either be detected and broken, or avoided altogether. Detecting and breaking deadlocks in the network is expensive in terms of latency and resources. An easier and less expensive solution is to avoid deadlocks by choosing routing techniques that prevent cyclic acquisition of channels. Logic behind turn restriction routing The logic behind turn restriction routing derives from a key observation. A cyclic acquisition of channels can take place only if all four possible clockwise (or anti-clockwise) turns have occurred. This means deadlocks can be avoided by prohibiting at least one of the clockwise turns and one of the anti-clockwise turns. All the clockwise and anti-clockwise turns that are possible in a non-restricted routing algorithm are shown in fig 2. Examples of turn restriction routing A turn restriction routing algorithm can be obtained by prohibiting at least one of the four possible clockwise turns and at least one of the four possible anti-clockwise turns. This means there are at least 16 (4×4) possible turn restriction routing techniques, since there are 4 clockwise turns and 4 anti-clockwise turns to choose from. Some of these techniques are listed below. Dimension-ordered (X-Y) routing Dimension-ordered (X-Y) routing (shown in fig 3) restricts all turns from the y-dimension to the x-dimension. This prohibits two anti-clockwise and two clockwise turns, which is more than what is actually required. Even so, because it restricts the turns that are allowed, it is an example of turn restriction routing (a minimal next-hop sketch is given after these examples). West first routing West first routing (shown in fig 4) restricts all turns to the west direction. This means the west direction should be taken first if it is needed in the proposed route. North last routing North last routing (shown in fig 5) restricts turning to any other direction if the current direction is north. This means the north direction should be taken last if it is needed in the proposed route. Negative first routing Negative first routing (shown in fig 6) restricts turning to a negative direction while the current direction is positive. West is considered the negative direction in the x-dimension and south is considered the negative direction in the y-dimension. This means any hop in one of the negative directions should be taken before any other turn.
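To make the examples above concrete, the following is a minimal C++ sketch of dimension-ordered (X-Y) routing on a 2D mesh; the port names and the coordinate convention (north meaning increasing y) are illustrative assumptions, not taken from any particular router design.

// Hedged sketch: X-Y routing always exhausts the x-dimension before moving in y,
// so a turn from the y-dimension back to the x-dimension can never occur.
enum class Port { East, West, North, South, Local };

// Next output port for a packet at router (x, y) destined for router (dest_x, dest_y).
Port XYNextHop(int x, int y, int dest_x, int dest_y)
{
    if (dest_x > x) return Port::East;   // finish routing in the x-dimension first
    if (dest_x < x) return Port::West;
    if (dest_y > y) return Port::North;  // only then route in the y-dimension
    if (dest_y < y) return Port::South;
    return Port::Local;                  // arrived: deliver to the local node
}

Because the y-to-x turns can never be taken, no cycle of channel acquisitions can form, which is exactly the turn-restriction argument described above.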
Advantages of turn restriction routing Avoiding deadlocks is less expensive to implement than deadlock detection and breaking techniques. Turn restrictions provide alternate minimum-length paths as well as non-minimum-length paths from one node to another, which allows routing around congested or failed links. For example, consider figure 7 below, in which multiple routers F1, F2, etc. feed packets to a congested but low-cost link from source router S to destination router D. Implementing turn restriction routing means that some of the turns from any of the feeder routers to the congested router S may now be restricted. Those feeder routers may have to use a longer path to get to destination D, thereby decongesting the link from S to D to an extent. See also Policy-based routing Deadlock Heuristic algorithms References Internet architecture Routing Heuristic algorithms Concurrency (computer science) Software bugs Software anomalies
1698594
https://en.wikipedia.org/wiki/Mahadev%20Satyanarayanan
Mahadev Satyanarayanan
Mahadev Satyanarayanan (Satya) is an experimental computer scientist, an ACM and IEEE fellow, and the Carnegie Group Professor of Computer Science at Carnegie Mellon University (CMU). He is credited with many of the advances in edge computing, distributed systems, mobile computing, pervasive computing, and the Internet of Things. His research focuses on performance, scalability, availability, and trust challenges in computing systems from the cloud to the mobile edge. His work on the Andrew File System (AFS) was recognized with the prestigious ACM Software System Award in 2016 and the ACM SIGOPS Hall of Fame Award in 2008 for its excellent engineering and long-lasting impact. His work on disconnected operation in the Coda File System received the ACM SIGOPS Hall of Fame Award in 2015 and the inaugural ACM SIGMOBILE Test-of-Time Award in 2016. He served as the founding Program Chair of the IEEE/ACM Symposium on Edge Computing and the HotMobile workshops, the founding Editor-in-Chief of IEEE Pervasive Computing, and the founding Area Editor for the Synthesis Series on Mobile and Pervasive Computing. In addition, he was the founding director of Intel Research Pittsburgh and an advisor to the company Maginatics, which was acquired by EMC in 2014. He received bachelor's and master's degrees from the Indian Institute of Technology, Madras, in 1975 and 1977, and his Ph.D. in Computer Science from CMU in 1983. Andrew File System Satya was a principal architect and implementer of the Andrew File System (AFS), the technical forerunner of modern cloud-based storage systems. AFS has been continuously deployed at CMU since 1986, at a scale of many thousands of users. From its conception in 1983 as the unifying campus-wide IT infrastructure for CMU, AFS evolved through versions AFS-1, AFS-2 and AFS-3. In mid-1989, AFS-3 was commercialized by Transarc Corporation and its evolution continued outside CMU. Transarc was acquired by IBM, and AFS became an IBM product for a number of years. In 2000, IBM released the code to the open source community as OpenAFS. Since its release as OpenAFS, the system has continued to be used in many enterprises all over the world. In the academic and research lab community, OpenAFS is in use at more than 30 sites in the United States (including CMU, MIT, and Stanford) and dozens of sites in Europe, New Zealand, and South Korea. Many global companies have used OpenAFS, including Morgan Stanley, Goldman Sachs, Qualcomm, IBM, United Airlines, Pfizer, Hitachi, InfoPrint, and Pictage. Over a 30-year period, AFS has been a seminal influence on academic research and commercial practice in distributed data storage systems for unstructured data. Its approach to native file system emulation, scalable file caching, access-control based security, and scalable system administration has proved to be of enduring value in enterprise-scale information sharing. The design principles that were initially discovered and validated in the creation and evolution of AFS have influenced virtually all modern commercial distributed file systems, including Microsoft DFS, Google File System, Lustre File System, Ceph, and NetApp ONTAP. In addition, AFS inspired the creation of Dropbox, whose founders used AFS as part of Project Athena at MIT. It also inspired the creation of Maginatics, a startup company advised by Satya that provides cloud-sourced network-attached storage for distributed environments. The NFS v4 network file system protocol standard has been extensively informed by the lessons of AFS.
In 2016, AFS was honored with the prestigious ACM Software System Award. Earlier, ACM recognized the significance of AFS by inducting a key paper on it to the ACM SIGOPS Hall of Fame. The AFS papers in 1985 and 1987 also received Outstanding Paper awards at the ACM Symposium on Operating System Principles. Coda File System In 1987, Satya began work on the Coda File System to address a fundamental shortcoming of AFS-like systems. Extensive first-hand experience with the AFS deployment at CMU showed that users are severely impacted by server and network failures. This vulnerability is not just hypothetical, but indeed a fact of life in real-world deployments. Once users become critically dependent on files cached from servers, a server or network failure renders these files inaccessible and leaves clients crippled for the duration of the failure. In a large enough system, unplanned outages of servers and network segments are practically impossible to avoid. Today's enthusiastic embrace of cloud computing rekindles many of these concerns because of increased dependence on centralized resources, The goal of the Coda project was to preserve the many strengths of AFS, while reducing its vulnerability to failures. Over 30+ years, research on Coda has proved to be highly fruitful in creating new insights and mechanisms for failure-resilient, scalable and secure read-write access to shared information by mobile and static users over wireless and wired networks. Coda was the first system to show how server replication could be combined with client caching to achieve good performance and high availability. Coda invented the concept of "disconnected operation", in which cached state on clients is used to mask network and server failures. Coda also demonstrated bandwidth-adaptive weakly-connected operation over networks with low bandwidth, high latency or frequent failures. Coda's use of optimistic replication, trading consistency for availability, was controversial when introduced. Today, it is a standard practice in all data storage systems for mobile environments. Coda also pioneered the concept of translucent caching, which balances the full transparency of classic caching with the user visibility needed to achieve a good user experience on bandwidth-challenged networks. The Coda concepts of hoarding, reintegration and application-specific conflict resolution are found in the cloud sync capabilities of virtually all mobile devices today. Key ideas from Coda were incorporated by Microsoft into the IntelliMirror component of Windows 2000 and the Cached Exchange Mode of Outlook 2003. Papers relating to Coda received Outstanding Paper awards at the 1991 and 1993 ACM Symposium on Operating System Principles. In 1999, Coda received the LinuxWorld Editor's Choice Award. A 2002 narrative retrospective, "The Evolution of Coda" traces its evolution and the lessons learned from it. Later, Coda's long-lasting impact was recognized with the ACM SIGOPS Hall of Fame Award in 2015 and the inaugural ACM SIGMOBILE Test-of-Time Award in 2016. Odyssey: Application-aware Adaptation for Mobile Applications In the mid-1990s, Satya initiated the Odyssey project to explore how operating systems should be extended to support future mobile applications. While Coda supported mobility in an application-transparent manner, Odyssey explored the space of application-aware approaches to mobility. Wireless network bandwidth and energy (i.e., battery life) were two of the key resource challenges faced by mobile applications. 
Odyssey invented the concept of application-aware adaptation and showed how the system call interface to the Unix operating system could be extended to support this new class of mobile applications such as video delivery and speech recognition. Odyssey envisioned a collaborative partnership between the operating system and individual applications. In this partnership, the operating system monitors, controls and allocates scarce resources such as wireless network bandwidth and energy, while the individual applications negotiate with the operating system on their resource requirements and modify application behavior to offer the best user experience achievable under current resource conditions. The 1997 and 1999 Odyssey papers on application-aware adaptation and energy-aware adaptation in the ACM Symposium on Operating System Principles have proved to be highly influential. The concepts of multi-fidelity algorithms and predictive resource management that emerged from this work have also proved to be influential. Aura: Cloud Offload for IoT In the late 1990s, Satya initiated the Aura Project in collaboration with CMU faculty colleagues David Garlan, Raj Reddy, Peter Steenkiste, Dan Siewiorek and Asim Smailagic. The challenge addressed by this effort was to reduce human distraction in mobile and pervasive computing environments, recognizing that human attention does not benefit from Moore's Law, while computing resources do. This leads directly to the notion of invisible computing, which parallels Mark Weiser's characterization of an ideal technology as one that disappears. The Aura vision proved to be an excellent driver of research in mobile and pervasive computing in areas such as cyber foraging, location-aware computing, energy-awareness, and task-level adaptation. In particular, the 1997 paper "Agile Application-Aware Adaptation for Mobile Computing" pioneered "cloud offload," in which mobile devices transmit processed sensor data to a cloud service for further analysis over a wireless network. A modern incarnation of this idea is speech recognition using Siri. Specifically, a user's speech is captured by a microphone, pre-processed, and then sent to a cloud service that converts speech to text. Satya continues to do IoT-related research. He retrospectively described the evolutionary path from his early work to today's cloud-based mobile and IoT systems in "A Brief History of Cloud Offload: A Personal Journey from Odyssey Through Cyber Foraging to Cloudlets.". Reflecting on the Aura vision and IoT implementation experience to date, Satya wrote an invited paper in 2001 entitled "Pervasive Computing: Vision and Challenges." This has proved to be his most widely cited work according to Google Scholar, and continues to receive well over 100 citations each year. The concepts discussed in this paper have directly inspired today's popular vision of an "Internet of Things (IoT)." In 2018, this visionary paper was recognized by the ACM SIGMOBILE Test-of-Time Award. Internet Suspend/Resume (ISR): Virtual Desktop Building on Intel's newly available VT virtual machine (VM) technology in 2001, ISR represents an AFS-like capability for cloud-sourced VMs. Instead of just delivering files, ISR enables entire computing environments (including the operating system and all applications) to be delivered from the cloud with perfect fidelity through on-demand caching to the edges of the Internet. 
The June 2002 paper introducing the ISR concept was the first to articulate the concept of wide-area hands-free mobile computing with a "zero-pound laptop." The ISR concept has proved to be highly influential in the mobile computing research community, spawning related research efforts in industry and academia. A series of implementations (ISR-1, ISR-2, ISR-3, and OpenISR) and associated deployments of ISR at CMU have investigated the implementation trade-offs in this space and demonstrated the real-world viability of this technology. The ISR project inspired commercial software such as Citrix XenDesktop and Microsoft Remote Desktop Services, commonly known as Virtual Desktop Infrastructure (VDI). The VDI industry has since become a billion-dollar industry. Olive: Execution Fidelity for Software Archiving The work on ISR inspired the Olive project, a collaboration between the computer science and digital library communities. One of the major challenges of digital archiving is the ability to preserve and accurately reproduce executable content across time periods of many decades (and eventually centuries). This problem also has analogs in industry. For example, a NASA space probe to the edge of the solar system may take 30 years to reach its destination; software maintenance over such an extended period requires precise re-creation of the probe's onboard software environment. By encapsulating the entire software environment in a VM (including, optionally, a software emulator for now-obsolete hardware), Olive preserves and dynamically reproduces the precise execution behavior of software. The Olive prototype demonstrated reliable archiving of software dating back to the early 1980s. The concept of execution fidelity, introduced by Olive, has proved to be highly influential in digital archiving. Diamond: Unindexed Search for High-dimensional Data The Diamond project explored interactive search of complex data such as photographs, video, and medical images that have not been tagged or indexed a priori. For such unstructured and high-dimensional data, the classical approach of full-text indexing is not viable: in contrast to text, which is human-authored and one-dimensional, raw image data requires a feature extraction step prior to indexing. Unfortunately, the features to extract for a given search are not known a priori. Only through interactive trial and error, with partial results to guide his progress, can a user converge on the best choice of features for a specific search. To support this search workflow, the OpenDiamond platform provided a storage architecture for discard-based search that pipelines user control, feature extraction, and per-object indexing computation and result caching. As documented in a 2010 paper, the I/O workloads generated by Diamond searches differ significantly from well-understood indexing workloads such as Hadoop, with important implications for storage subsystems. The unique search capabilities of Diamond attracted significant interest in the medical and pharmaceutical research communities. Researchers in these communities collaborated in creating Diamond-based applications for domains such as radiology (breast cancer screening), pathology and dermatology (melanoma diagnosis), drug discovery (anomaly detection), and craniofacial genetics (cleft lip syndrome genetic screening). The work on Diamond and associated software spurred extensive collaboration between Satya's research group at CMU and Health Sciences at the University of Pittsburgh. 
The collaboration with pathologists led to the design and implementation of OpenSlide a vendor‐neutral open source library for digital pathology. OpenSlide is in use today by many academic and industrial organizations worldwide, including many research sites in the United States that are funded by the National Institutes of Health and companies such as HistoWiz. Elijah: Edge Computing Satya pioneered edge computing with the 2009 publication of the paper "The Case for VM-based Cloudlets in Mobile Computing," and his ensuing research efforts in Project Elijah. This paper is now widely recognized as the founding manifesto of edge computing, and has proved to be highly influential in shaping thoughts and actions. It was written in close collaboration with Victor Bahl from Microsoft, Roy Want from Intel (now at Google), Ramon Caceres from AT&T (also at Google now), and Nigel Davies from Lancaster University. This paper introduced the concept of cloudlets, which are small data-centers located at the network edge. As a new computing tier between mobile devices and the cloud, they have powerful computational resources and excellent connectivity to mobile devices, typically just one wireless hop away. Their low latency and high bandwidth to mobile users and sensors make them ideal locations for offloading computation. A detailed account of the origin of the paper and the cloudlet concept is described in the 2014 retrospective, "A Brief History of Cloud Offload: A Personal Journey from Odyssey Through Cyber Foraging to Cloudlets." Edge computing has now become one of the hottest topics in industry and academia. It is particularly relevant to mobile and IoT use cases in which a significant amount of live sensor data needs to be intensively processed in real-time. Many applications in domains such as VR/AR, factory automation, and autonomous vehicles exhibit such workflow. For example, high-quality commercial VR headsets, such as Oculus Rift and HTC Vive, require tethering to a GPU-equipped desktop. Such tethering negatively impacts user experience. On the other hand, untethered devices sacrifice the quality of the virtual environment. Fundamentally, these latency-sensitive, resource-hungry, and bandwidth-intensive applications cannot run on mobile devices alone due to insufficient compute power, nor can they run in the cloud due to long network latency. Only edge computing can break this deadlock. The concepts of VM synthesis and VM handoff were conceived and demonstrated in Elijah, leading to the OpenStack++ reference implementation of cloudlet software infrastructure. The Open Edge Computing Initiative is a collection of companies working closely with CMU to build an open ecosystem for edge computing. Gabriel: Wearable Cognitive Assistance In 2004, Satya wrote the thought piece "Augmenting Cognition" that imagined a world in which human receive useful real-time guidance on everyday tasks from wearable devices whose capabilities are amplified by nearby compute servers. A decade later, with the emergence of edge computing and the commercial availability of wearable devices such as Google Glass and Microsoft Hololens, the prerequisites to realize this vision were at hand. Satya initiated Project Gabriel to explore this new genre of applications, which combine the look and feel of augmented reality (AR) with algorithms associated with artificial intelligence (AI). The 2014 paper "Towards Wearable Cognitive Assistance" describes the Gabriel platform for such application. 
Many applications (such as one to assemble an IKEA table lamp) have been built on the Gabriel platform, and videos of them are publicly available. In these applications, a user wears head-mounted smart glasses that continuously capture the user's actions and surroundings from a first-person viewpoint. The video stream is transmitted in real time to a cloudlet and analyzed to identify the current state of the assembly. Audiovisual instructions are then generated to demonstrate the next step, or to alert the user to a mistake and help correct it. In 2016, CBS 60 Minutes covered the Gabriel project in a special edition on artificial intelligence.
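The capture, offload, and guide loop described above can be made concrete with a minimal sketch. The following C fragment is illustrative only and is not Gabriel's actual protocol or API: the cloudlet address 127.0.0.1, the port 9099, and the plain-text placeholder "frame" are assumptions made purely for this example, and only standard POSIX socket calls are used.
    /*
     * Hypothetical offload loop for a wearable client: capture data, send it
     * one network hop to a nearby cloudlet, and act on the guidance returned.
     * Address, port, and payload format are assumptions, not Gabriel's design.
     */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* Connect to an assumed cloudlet service one wireless hop away. */
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) {
            perror("socket");
            return 1;
        }
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9099);                      /* assumed port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed address */
        if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect (no cloudlet stub is listening)");
            close(sock);
            return 1;
        }

        /* Stand-in for one captured camera frame; a real client would stream
         * encoded video rather than a text placeholder. */
        const char frame[] = "FRAME 0001\n";
        if (send(sock, frame, sizeof(frame) - 1, 0) < 0) {
            perror("send");
            close(sock);
            return 1;
        }

        /* The reply stands in for the generated audiovisual guidance. */
        char guidance[256];
        ssize_t n = recv(sock, guidance, sizeof(guidance) - 1, 0);
        if (n > 0) {
            guidance[n] = '\0';
            printf("guidance from cloudlet: %s\n", guidance);
        }
        close(sock);
        return 0;
    }
A real deployment would stream encoded video and receive structured guidance (audio cues or rendered overlays) in return, but the shape of the loop (capture, send one hop to the cloudlet, act on the reply) is the same.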
9022539
https://en.wikipedia.org/wiki/Ambient%20authority
Ambient authority
Ambient authority is a term used in the study of access control systems. A subject, such as a computer program, is said to be using ambient authority if it only needs to specify the names of the involved object(s) and the operation to be performed on them in order for a permitted action to succeed. In this definition, a "name" is any way of referring to an object that does not itself include authorising information, and could potentially be used by any subject; an action is "permitted" for a subject if there exists any request that that subject could make that would cause the action to be carried out. The authority is "ambient" in the sense that it exists in a broadly visible environment (often, but not necessarily, a global environment) where any subject can request it by name.
For example, suppose a C program opens a file for read access by executing the call:
open("filename", O_RDONLY, 0)
The desired file is designated by its name on the filesystem, which does not by itself include authorising information, so the program is exercising ambient authority.
When ambient authority is requested, permissions are granted or denied based on one or more global properties of the executing program, such as its identity or its role. In such cases, the management of access control is handled separately from explicit communication to the executing program or process, through means such as access control lists associated with objects or through Role-Based Access Control mechanisms. The executing program has no means to reify the permissions that it was granted for a specific purpose as first-class values. So, if the program should be able to access an object when acting on its own behalf but not when acting on behalf of one of its clients (or on behalf of one client but not another), it has no way to express that intention. This inevitably leads to such programs being subject to the confused deputy problem.
The term "ambient authority" is used primarily to contrast with capability-based security (including object-capability models), in which executing programs receive permissions as they might receive data, as communicated first-class object references. This allows them to determine where the permissions came from, and thus avoid the confused deputy problem. However, since there are additional requirements for a system to be considered a capability system besides avoiding ambient authority, "non-ambient authority system" is not just a synonym for "capability system".
Ambient authority is the dominant form of access control in computer systems today. The user model of access control as used in Unix and in Windows systems is an ambient authority model, because programs execute with the authorities of the user that started them. This not only means that executing programs are inevitably given more permissions (see the principle of least privilege) than they need for their task, but also that they are unable to determine the source or the number and types of permissions that they have. A program executing under an ambient authority access control model has little option but to designate permissions and try to exercise them, hoping for the best. This property requires an excess of permissions to be granted to users or roles in order for programs to execute without error.
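The contrast between name-based and capability-style designation can be illustrated with a short sketch. The following C fragment is not taken from any particular capability system; the function names log_ambient and log_with_fd and the file name app.log are hypothetical, and only standard POSIX calls (open, write, close) are used.
    /* Minimal sketch (illustrative only) contrasting ambient authority with
     * explicit, capability-style designation using POSIX file descriptors. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Ambient authority: the deputy receives only a NAME and opens it with
     * whatever permissions the whole process happens to have. A client that
     * controls the name can steer the deputy at any file the process may
     * write, which is the confused deputy problem. */
    static int log_ambient(const char *path, const char *msg)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return -1;
        ssize_t n = write(fd, msg, strlen(msg));
        close(fd);
        return n < 0 ? -1 : 0;
    }

    /* Capability style: the caller opens the file itself and hands the deputy
     * an already-authorised file descriptor. The deputy exercises only the
     * authority it was explicitly given and can tell where it came from. */
    static int log_with_fd(int fd, const char *msg)
    {
        ssize_t n = write(fd, msg, strlen(msg));
        return n < 0 ? -1 : 0;
    }

    int main(void)
    {
        /* Ambient: authority is looked up by name in the global filesystem. */
        if (log_ambient("app.log", "ambient write\n") == 0)
            printf("ambient write succeeded\n");

        /* Explicit: the permission travels with the descriptor, like data. */
        int fd = open("app.log", O_WRONLY | O_APPEND);
        if (fd >= 0) {
            if (log_with_fd(fd, "capability-style write\n") == 0)
                printf("capability-style write succeeded\n");
            close(fd);
        }
        return 0;
    }
In the second form, the deputy can be handed a descriptor for exactly one file chosen by its caller, so a client cannot redirect it at some other object that only the deputy's process is allowed to write; avoiding that misdirection is precisely the confused deputy concern described above.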
1589136
https://en.wikipedia.org/wiki/Italian%20literature
Italian literature
Italian literature is written in the Italian language, particularly within Italy. It may also refer to literature written by Italians or in Italy in other languages spoken in Italy, often languages that are closely related to modern Italian. Italian literature begins in the 12th century, when in different regions of the peninsula the Italian vernacular started to be used in a literary manner. The Ritmo laurenziano is the first extant document of Italian literature. An early example of Italian literature is the tradition of vernacular lyric poetry performed in Occitan, which reached Italy by the end of the 12th century. The Sicilian School, which began in 1230, is notable for being the first poetic style written in a standard Italian. Dante Alighieri, one of the greatest of Italian poets, is notable for his Divine Comedy. Petrarch did classical research and wrote lyric poetry. Renaissance humanism developed during the 14th and the beginning of the 15th centuries. Humanists sought to create a citizenry able to speak and write with eloquence and clarity. Early humanists, such as Petrarch, were great collectors of antique manuscripts. Lorenzo de' Medici exemplifies the influence of Florence on the Renaissance. Leonardo da Vinci wrote a treatise on painting. The development of the drama in the 15th century was very great. The fundamental characteristic of the era following the Renaissance is that it perfected the Italian character of its language. Niccolò Machiavelli and Francesco Guicciardini were the chief originators of the science of history. Pietro Bembo was an influential figure in the development of the Italian language and an influence on the 16th-century revival of interest in the works of Petrarch. In 1690 the Academy of Arcadia was instituted with the goal of "restoring" literature by imitating the simplicity of the ancient shepherds, with sonnets, madrigals, canzonette and blank verse. In the 17th century, some strong and independent thinkers, such as Bernardino Telesio, Lucilio Vanini, Bruno and Campanella, turned philosophical inquiry into fresh channels and opened the way for the scientific conquests of Galileo Galilei, who is notable both for his scientific discoveries and his writing. In the 18th century, the political condition of Italy began to improve, and philosophers wrote throughout Europe in the period known as the Enlightenment. Apostolo Zeno and Metastasio are two of the notable figures of the age. Carlo Goldoni, a Venetian, created the comedy of character. The leading figure of the literary revival of the 18th century was Giuseppe Parini. The ideas behind the French Revolution of 1789 gave a special direction to Italian literature in the second half of the 18th century. Love of liberty and desire for equality created a literature aimed at a national object. Patriotism and classicism were the two principles that inspired the literature that began with Vittorio Alfieri. Other patriots included Vincenzo Monti and Ugo Foscolo. The romantic school had as its organ the Conciliatore, established in 1818 at Milan. The main instigator of the reform was Alessandro Manzoni. The great poet of the age was Giacomo Leopardi. History returned to its spirit of learned research. The literary movement that preceded and was contemporary with the political revolution of 1848 may be said to be represented by four writers: Giuseppe Giusti, Francesco Domenico Guerrazzi, Vincenzo Gioberti and Cesare Balbo. After the Risorgimento, political literature became less important.
The first part of this period is characterized by two divergent trends of literature that both opposed Romanticism, the Scapigliatura and Verismo. Important early-20th-century writers include Italo Svevo and Luigi Pirandello (winner of the 1934 Nobel Prize in Literature). Neorealism was developed by Alberto Moravia. Umberto Eco became internationally successful with the Medieval detective story Il nome della rosa (The Name of the Rose, 1980). The Nobel Prize in Literature has been awarded to Italian language authors six times (as of 2019) with winners including Giosuè Carducci, Grazia Deledda, Luigi Pirandello, Salvatore Quasimodo, Eugenio Montale and Dario Fo. Early medieval Latin literature As the Western Roman Empire declined, the Latin tradition was kept alive by writers such as Cassiodorus, Boethius, and Symmachus. The liberal arts flourished at Ravenna under Theodoric, and the Gothic kings surrounded themselves with masters of rhetoric and of grammar. Some lay schools remained in Italy, and noted scholars included Magnus Felix Ennodius, Arator, Venantius Fortunatus, Felix the Grammarian, Peter of Pisa, Paulinus of Aquileia, and many others. Italians who were interested in theology gravitated towards Paris. Those who remained were typically attracted by the study of Roman law. This furthered the later establishment of the medieval universities of Bologna, Padua, Vicenza, Naples, Salerno, Modena and Parma. These helped to spread culture, and prepared the ground in which the new vernacular literature developed. Classical traditions did not disappear, and affection for the memory of Rome, a preoccupation with politics, and a preference for practice over theory combined to influence the development of Italian literature. High medieval literature Trovatori The earliest vernacular literary tradition in Italy was in Occitan, a language spoken in parts of northwest Italy. A tradition of vernacular lyric poetry arose in Poitou in the early 12th century and spread south and east, eventually reaching Italy by the end of the 12th century. The first troubadours (trovatori in Italian), as these Occitan lyric poets were called, to practise in Italy were from elsewhere, but the high aristocracy of Lombardy was ready to patronise them. It was not long before native Italians adopted Occitan as a vehicle for poetic expression, though the term Occitan did not really appear until the year 1300, "langue d'oc" or "provenzale" being the preferred expressions. Among the early patrons of foreign troubadours were especially the House of Este, the Da Romano, House of Savoy, and the Malaspina. Azzo VI of Este entertained the troubadours Aimeric de Belenoi, Aimeric de Peguilhan, Albertet de Sestaro, and Peire Raimon de Tolosa from Occitania and Rambertino Buvalelli from Bologna, one of the earliest Italian troubadours. The influence of these poets on the native Italians got the attention of Aimeric de Peguilhan in 1220. Then at the Malaspina court, he penned a poem attacking a quintet of Occitan poets at the court of Manfred III of Saluzzo: Peire Guilhem de Luserna, Perceval Doria, Nicoletto da Torino, Chantarel, and Trufarel. Aimeric apparently feared the rise of native competitors. The margraves of Montferrat—Boniface I, William VI, and Boniface II—were patrons of Occitan poetry. Peire de la Mula stayed at the Montferrat court around 1200 and Raimbaut de Vaqueiras spent most of his career as court poet and close friend of Boniface I. 
Raimbaut, along with several other troubadours, including Elias Cairel, followed Boniface on the Fourth Crusade and established, however briefly, Italo-Occitan literature in Thessalonica. Azzo VI's daughter, Beatrice, was an object of the early poets' "courtly love". Azzo's son, Azzo VII, hosted Elias Cairel and Arnaut Catalan. Rambertino was named podestà of Genoa in 1218, and it was probably during his three-year tenure there that he introduced Occitan lyric poetry to the city, which later developed a flourishing Occitan literary culture. Among the Genoese troubadours were Lanfranc Cigala, a judge; Calega Panzan, a merchant; Jacme Grils, also a judge; and Bonifaci Calvo, a knight. Genoa was also the place of genesis of the podestà-troubadour phenomenon: men who served in several cities as podestàs on behalf of either the Guelph or Ghibelline party and who wrote political poetry in Occitan. Rambertino Buvalelli was the first podestà-troubadour, and in Genoa there were the Guelphs Luca Grimaldi and Luchetto Gattilusio and the Ghibellines Perceval and Simon Doria. The Occitan tradition in Italy was broader than simply Genoa or even Lombardy. Bertolome Zorzi was from Venice. Girardo Cavallazzi was a Ghibelline from Novara. Nicoletto da Torino was probably from Turin. In Ferrara the Duecento was represented by Ferrari Trogni. Terramagnino da Pisa, from Pisa, wrote the Doctrina de cort as a manual of courtly love. He was one of the late 13th-century figures who wrote in both Occitan and Italian. Paolo Lanfranchi da Pistoia, from Pistoia, was another. Both wrote sonnets, but while Terramagnino was a critic of the Tuscan school, Paolo has been alleged to have been a member. On the other hand, he has much in common with the Sicilians and the Dolce Stil Novo. Perhaps the most important aspect of the Italian troubadour phenomenon was the production of chansonniers and the composition of vidas and razos. Uc de Saint Circ, who was associated with the Da Romano and Malaspina families, spent the last forty years of his life in Italy. He undertook to author the entire razo corpus and a great many of the vidas. The most famous and influential Italian troubadour, however, was from the small town of Goito near Mantua. Sordello (1220s–1230s) has been praised by such later poets as Dante Alighieri, Robert Browning, Oscar Wilde, and Ezra Pound. He was the inventor of the hybrid genre of the sirventes-planh in 1237. The troubadours had a connexion with the rise of a school of poetry in the Kingdom of Sicily. In 1220 Obs de Biguli was present as a "singer" at the coronation of the Emperor Frederick II, already King of Sicily. Guillem Augier Novella before 1230 and Guilhem Figueira thereafter were important Occitan poets at Frederick's court. Both had fled the Albigensian Crusade, like Aimeric de Peguilhan. The Crusade had devastated Languedoc and forced many troubadours of the area, whose poetry had not always been kind to the Church hierarchy, to flee to Italy, where an Italian tradition of papal criticism was begun. Protected by the emperor and the Ghibelline faction, criticism of the Church establishment flourished.
Chivalric romance
The Historia de excidio Trojae, attributed to Dares Phrygius, claimed to be an eyewitness account of the Trojan war. It provided inspiration for writers in other countries such as Benoît de Sainte-Maure, Herbort von Fritzlar, and Konrad von Würzburg. While Benoît wrote in French, he took his material from a Latin history.
Herbort and Konrad used a French source to make an almost original work in their own language. Guido delle Colonne of Messina, one of the vernacular poets of the Sicilian school, composed the Historia destructionis Troiae. In his poetry Guido was an imitator of the Provençals, but in this book he converted Benoît's French romance into what sounded like serious Latin history. Much the same thing occurred with other great legends. Qualichino of Arezzo wrote couplets about the legend of Alexander the Great. Europe was full of the legend of King Arthur, but the Italians contented themselves with translating and abridging French romances. Jacobus de Voragine, while collecting his Golden Legend (1260), remained a historian. He seemed doubtful of the truthfulness of the stories he told. The intellectual life of Italy showed itself in an altogether special, positive, almost scientific form in the study of Roman law. Farfa, Marsicano, and other scholars translated Aristotle, the precepts of the school of Salerno, and the travels of Marco Polo, linking the classics and the Renaissance. At the same time, epic poetry was written in a mixed language, a dialect of Italian based on French: hybrid words exhibited a treatment of sounds according to the rules of both languages, had French roots with Italian endings, and were pronounced according to Italian or Latin rules. In short, the language of the epic poetry belonged to both tongues. Examples include the chansons de geste, Macaire, the Entre en Espagne written by Niccola of Padua, the Prise de Pampelune, and others. All this preceded the appearance of a purely Italian literature. The emergence of native vernacular literature The French and Occitan languages gradually gave way to the native Italian. Hybridism recurred, but it no longer predominated. In the Bovo d'Antona and the Rainaldo e Lesengrino, Venetian is clearly felt, although the language is influenced by French forms. These writings, which Graziadio Isaia Ascoli has called miste (mixed), immediately preceded the appearance of purely Italian works. There is evidence that a kind of literature already existed before the 13th century: The Ritmo cassinese, Ritmo di Sant'Alessio, Laudes creaturarum, Ritmo lucchese, Ritmo laurenziano, Ritmo bellunese are classified by Cesare Segre, et al. as "Archaic Works" (Componimenti Arcaici): "such are labeled the first literary works in the Italian vernacular, their dates ranging from the last decades of the 12th century to the early decades of the 13th" (Segre: 1997). However, as he points out, such early literature does not yet present any uniform stylistic or linguistic traits. This early development, however, was simultaneous in the whole peninsula, varying only in the subject matter of the art. In the north, the poems of Giacomino da Verona and Bonvicino da Riva were specially religious, and were intended to be recited to the people. They were written in a dialect of Milanese and Venetian; their style bore the influence of French narrative poetry. They may be considered as belonging to the "popular" kind of poetry, taking the word, however, in a broad sense. This sort of composition may have been encouraged by the old custom in the north of Italy of listening in the piazzas and on the highways to the songs of the jongleurs. 
The crowds were delighted with the stories of romances, the wickedness of Macaire, and the misfortunes of Blanziflor, the terrors of the Babilonia Infernale and the blessedness of the Gerusalemme celeste, and the singers of religious poetry vied with those of the chansons de geste. Sicilian School The year 1230 marked the beginning of the Sicilian School and of a literature showing more uniform traits. Its importance lies more in the language (the creation of the first standard Italian) than its subject, a love-song partly modeled on the Provençal poetry imported to the south by the Normans and the Svevs under Frederick II. This poetry differs from the French equivalent in its treatment of the woman, less erotic and more platonic, a vein further developed by Dolce Stil Novo in later 13th-century Bologna and Florence. The customary repertoire of chivalry terms is adapted to Italian phonotactics, creating new Italian vocabulary. The French suffixes -ière and -ce generated hundreds of new Italian words in -iera and -za (for example, riv-iera and costan-za). These were adopted by Dante and his contemporaries, and handed on to future generations of Italian writers. To the Sicilian school belonged Enzio, king of Sardinia, Pietro della Vigna, Inghilfredi, Guido and Odo delle Colonne, Jacopo d'Aquino, Ruggieri Apugliese, Giacomo da Lentini, Arrigo Testa, and others. Most famous is Io m'aggio posto in core (I have stated within my heart), by Giacomo da Lentini, the head of the movement, but there is also poetry written by Frederick himself. Giacomo da Lentini is also credited with inventing the sonnet, a form later perfected by Dante and Petrarch. The censorship imposed by Frederick meant that no political matter entered literary debate. In this respect, the poetry of the north, still divided into communes or city-states with relatively democratic governments, provided new ideas. These new ideas are shown in the Sirventese genre, and later, Dante's Commedia, full of invectives against contemporary political leaders and popes. Though the conventional love-song prevailed at Frederick's (and later Manfred's) court, more spontaneous poetry existed in the Contrasto attributed to Cielo d'Alcamo. This contrasto (dispute) between two lovers in the Sicilian language is not the most ancient or the only southern poem of a popular kind. It belongs without doubt to the time of the emperor Frederick II (no later than 1250), and is important as proof that there existed a popular, independent of literary, poetry. The Contrasto is probably a scholarly re-elaboration of a lost popular rhyme and is the closest to a kind of poetry that perished or was smothered by the ancient Sicilian literature. Its distinguishing point was its possession of all qualities opposite to the poetry of the rhymers of the "Sicilian School", though its style may betray a knowledge of Frederick's poetry, and there is probably a satiric intent in the mind of the anonymous poet. It is vigorous in the expression of feelings. The conceits, sometimes bold and very coarse, show that its subject matter is popular. Everything about the Contrasto is original. The poems of the Sicilian school were written in the first known standard Italian. This was elaborated by these poets under the direction of Frederick II and combines many traits typical of the Sicilian, and to a lesser, but not negligible extent, Apulian dialects and other southern dialects, with many words of Latin and French origin. 
Dante's styles illustre, cardinale, aulico, curiale were developed from his linguistic study of the Sicilian School, whose technical features had been imported by Guittone d'Arezzo in Tuscany, though he did introduce political issues in his "canzoniere". The standard changed slightly in Tuscany, because Tuscan scriveners perceived the five-vowel system used by southern Italian as a seven-vowel one. As a consequence, the texts that Italian students read in their anthologies contain lines that appear not to rhyme with each other (sometimes Sic. -i > -e, -u > -o), a feature known as "Sicilian rhyme" (rima siciliana), which was later widely used by poets such as Dante and Petrarch as a display of technical skill or as a last resort; that may account for its decline in popularity through the 19th and early 20th centuries.
Religious literature
In the 13th century a religious movement took place in Italy, with the rise of the Dominican and Franciscan Orders. The earliest preserved sermons in an Italian language are from Jordan of Pisa, a Dominican. Francis of Assisi, mystic and reformer in the Catholic Church, the founder of the Franciscans, also wrote poetry. Though he was educated, Francis's poetry was beneath the refined poetry at the center of Frederick's court. According to legend, Francis dictated the hymn Cantico del Sole in the eighteenth year of his penance, almost rapt in ecstasy; doubts remain about its authenticity. It was the first great poetical work of Northern Italy, written in a kind of verse marked by assonance, a poetic device more widespread in Northern Europe. Other poems previously attributed to Francis are now generally recognized as lacking in authenticity. Jacopone da Todi was a poet who represented the religious feeling that had made special progress in Umbria. Jacopone was possessed by St. Francis's mysticism, but was also a satirist who mocked the corruption and hypocrisy of the Church personified by Pope Boniface VIII, persecutor of Jacopone and Dante. Jacopone's wife died after the stands at a public tournament collapsed, and the sorrow at her sudden death caused Jacopone to sell all he possessed and give it to the poor. Jacopone covered himself with rags, joined St. Francis's Third Order, took pleasure in being laughed at, and was followed by a crowd of people who mocked him and called after him Jacopone, Jacopone. He went on raving for years, subjecting himself to the severest sufferings, and giving vent to his religious intoxication in his poems. Jacopone was a mystic, who from his hermit's cell looked out into the world and specially watched the papacy, scourging with his words Pope Celestine V and Pope Boniface VIII, for which he was imprisoned. The religious movement in Umbria was followed by another literary phenomenon, the religious drama. In 1258 a hermit, Raniero Fasani, left the cavern where he had lived for many years and suddenly appeared at Perugia. Fasani represented himself as sent by God to disclose mysterious visions, and to announce to the world terrible visitations. This was a turbulent period of political faction (the Guelphs and Ghibellines), interdicts and excommunications issued by the popes, and reprisals of the imperial party. In this environment, Fasani's pronouncements stimulated the formation of the Compagnie di Disciplinanti, who, for a penance, scourged themselves until they drew blood, and sang Laudi in dialogue in their confraternities.
These laudi, closely connected with the liturgy, were the first example of the drama in the vernacular tongue of Italy. They were written in the Umbrian dialect, in verses of eight syllables, and, according to the 1911 Encyclopædia Britannica, "have not any artistic value." Their development, however, was rapid. As early as the end of the 13th century the Devozioni del Giovedi e Venerdi Santo appeared, mixing liturgy and drama. Later, di un Monaco che andò al servizio di Dio ("of a monk who entered the service of God") approached the definite form the religious drama would assume in the following centuries. First Tuscan literature 13th-century Tuscany was in a unique situation. The Tuscans spoke a dialect that closely resembled Latin and afterward became, almost exclusively, the language of literature, and which was already regarded at the end of the 13th century as surpassing other dialects. ("The Tuscan tongue is better suited to the letter or literature") wrote Antonio da Tempo of Padua, born about 1275. After the fall of the Hohenstaufen at the Battle of Benevento in 1266, it was the first province of Italy. From 1266, Florence began a political reform movement that led, in 1282, to the appointment of the Priori delle Arti, and establishment of the Arti Minori. This was later copied by Siena (with the Magistrato dei Nove), by Lucca, by Pistoia, and by other Guelph cities in Tuscany with similar popular institutions. The guilds took the government into their hands, and it was a time of social and political prosperity. In Tuscany, too, popular love poetry existed. A school of imitators of the Sicilians was led by Dante da Majano, but its literary originality took another line — that of humorous and satirical poetry. The entirely democratic form of government created a style of poetry that stood strongly against the medieval mystic and chivalrous style. Devout invocation of God or of a lady came from the cloister and the castle; in the streets of the cities everything that had gone before was treated with ridicule or biting sarcasm. Folgore da San Gimignano laughs when in his sonnets he tells a party of Sienese youths the occupations of every month in the year, or when he teaches a party of Florentine lads the pleasures of every day in the week. Cenne della Chitarra laughs when he parodies Folgore's sonnets. The sonnets of Rustico di Filippo are half-fun and half-satire, as is the work of Cecco Angiolieri of Siena, the oldest humorist we know, a far-off precursor of Rabelais and Montaigne. Another kind of poetry also began in Tuscany. Guittone d'Arezzo made art quit chivalry and Provençal forms for national motives and Latin forms. He attempted political poetry, and, although his work is often obscure, he prepared the way for the Bolognese school. Bologna was the city of science, and philosophical poetry appeared there. Guido Guinizelli was the poet after the new fashion of the art. In his work the ideas of chivalry are changed and enlarged. Only those whose heart is pure can be blessed with true love, regardless of class. He refuted the traditional credo of courtly love, for which love is a subtle philosophy only a few chosen knights and princesses could grasp. Love is blind to blasons but not to a good heart when it finds one: when it succeeds it is the result of the spiritual, not physical affinity between two souls. 
Guinizzelli's democratic view can be better understood in the light of the greater equality and freedom enjoyed by the city-states of the center-north and the rise of a middle class eager to legitimise itself in the eyes of the old nobility, still regarded with respect and admiration but in fact dispossessed of its political power. Guinizelli's Canzoni make up the bible of Dolce Stil Novo, and one in particular, "Al cor gentil" ("To a Kind Heart") is considered the manifesto of the new movement that bloomed in Florence under Cavalcanti, Dante, and their followers. His poetry has some of the faults of the school of d'Arezzo. Nevertheless, he marks a great development in the history of Italian art, especially because of his close connection with Dante's lyric poetry. In the 13th century, there were several major allegorical poems. One of these is by Brunetto Latini, who was a close friend of Dante. His Tesoretto is a short poem, in seven-syllable verses, rhyming in couplets, in which the author is lost in a wilderness and meets a lady, who represents Nature and gives him much instruction. We see here vision, allegory, and instruction with a moral object—three elements we find again in the Divine Comedy. Francesco da Barberino, a learned lawyer who was secretary to bishops, a judge, and a notary, wrote two little allegorical poems, the Documenti d'amore and Del reggimento e dei costumi delle donne. The poems today are generally studied not as literature, but for historical context. A fourth allegorical work was the Intelligenza, which is sometimes attributed to Compagni, but is probably only a translation of French poems. In the 15th century, humanist and publisher Aldus Manutius published Tuscan poets Petrarch and Dante Alighieri (The Divine Comedy), creating the model for what became a standard for modern Italian. Development of early prose Italian prose of the 13th century was as abundant and varied as its poetry. The earliest example dates from 1231, and consists of short notices of entries and expenses by Mattasala di Spinello dei Lambertini of Siena. At this time, there was no sign of literary prose in Italian, though there was in French. Halfway through the century, a certain Aldobrando or Aldobrandino, from either Florence or Siena, wrote a book for Beatrice of Savoy, countess of Provence, called Le Régime du corps. In 1267 Martino da Canale wrote a history of Venice in the same Old French (langue d'oïl). Rusticiano of Pisa, who was for a long while at the court of Edward I of England, composed many chivalrous romances, derived from the Arthurian cycle, and subsequently wrote the Travels of Marco Polo, which may have been dictated by Polo himself. And finally Brunetto Latini wrote his Tesoro in French. Latini also wrote some works in Italian prose such as La rettorica, an adaptation from Cicero's De inventione, and translated three orations from Cicero: Pro Ligario, Pro Marcello and Pro rege Deiotaro. Another important writer was the Florentine judge Bono Giamboni, who translated Orosius's Historiae adversus paganos, Vegetius's Epitoma rei militaris, made a translation/adaptation of Cicero's De inventione mixed with the Rethorica ad Erennium, and a translation/adaptation of Innocent III's De miseria humane conditionis. He also wrote an allegorical novel called Libro de' Vizi e delle Virtudi whose earlier version (Trattato delle virtù e dei vizi) is also preserved. Andrea of Grosseto, in 1268, translated three Treaties of Albertanus of Brescia, from Latin to Tuscan dialect. 
After the original compositions in the langue d'oïl came translations or adaptations from the same. There are some moral narratives taken from religious legends, a romance of Julius Caesar, some short histories of ancient knights, the Tavola rotonda, translations of the Viaggi of Marco Polo, and of Latini's Tesoro. At the same time, translations from Latin of moral and ascetic works, histories, and treatises on rhetoric and oratory appeared. Some of the works previously regarded as the oldest in the Italian language have been shown to be forgeries of a much later time. The oldest prose writing is a scientific book, Composizione del mondo by Ristoro d'Arezzo, who lived about the middle of the 13th century. This work is a copious treatise on astronomy and geography. Ristoro was a careful observer of natural phenomena; many of the things he relates were the result of his personal investigations, and consequently his works are more reliable than those of other writers of the time on similar subjects. Another short treatise exists: De regimine rectoris, by Fra Paolino, a Minorite friar of Venice, who was probably bishop of Pozzuoli, and who also wrote a Latin chronicle. His treatise stands in close relation to that of Egidio Colonna, De regimine principum. It is written in Venetian. The 13th century was very rich in tales. A collection called the Cento Novelle antiche contains stories drawn from many sources, including Asian, Greek and Trojan traditions, ancient and medieval history, the legends of Brittany, Provence and Italy, the Bible, local Italian traditions, and histories of animals and old mythology. This book has a distant resemblance to the Spanish collection known as El Conde Lucanor. The peculiarity of the Italian book is that the stories are very short, and seem to be mere outlines to be filled in by the narrator as he goes along. Other prose novels were inserted by Francesco Barberino in his work Del reggimento e dei costumi delle donne, but they are of much less importance. On the whole the Italian novels of the 13th century have little originality, and are a faint reflection of the very rich legendary literature of France. Some attention should be paid to the Lettere of Fra Guittone d'Arezzo, who wrote many poems and also some letters in prose, the subjects of which are moral and religious. Guittone's love of antiquity and the traditions of Rome and its language was so strong that he tried to write Italian in a Latin style. The letters are obscure, involved and altogether barbarous. Guittone took as his special model Seneca the Younger, and hence his prose became bombastic. Guittone viewed his style as very artistic, but later scholars view it as extravagant and grotesque. Dolce Stil Novo In the year 1282 a period of new literature began, developing from the Tuscan beginnings. With the school of Lapo Gianni, Guido Cavalcanti, Cino da Pistoia and Dante Alighieri, lyric poetry became exclusively Tuscan. The whole novelty and poetic power of this school, consisted in, according to Dante, Quando Amore spira, noto, ed a quel niodo Ch'ei detta dentro, vo significando: that is, in a power of expressing the feelings of the soul in the way in which love inspires them, in an appropriate and graceful manner, fitting form to matter, and by art fusing one with the other. Love is a divine gift that redeems man in the eyes of God, and the poet's mistress is the angel sent from heaven to show the way to salvation. 
This a neo-platonic approach widely endorsed by Dolce Stil Novo, and although in Cavalcanti's case it can be upsetting and even destructive, it is nonetheless a metaphysical experience able to lift man onto a higher, spiritual dimension. Gianni's new style was still influenced by the Siculo-Provençal school. Cavalcanti's poems fall into two classes: those that portray the philosopher, (il sottilissimo dialettico, as Lorenzo the Magnificent called him) and those more directly the product of his poetic nature imbued with mysticism and metaphysics. To the first set belongs the famous poem Sulla natura d'amore, which in fact is a treatise on amorous metaphysics, and was annotated later in a learned way by renowned Platonic philosophers of the 15th century, such as Marsilius Ficinus and others. In other poems, Cavalcanti tends to stifle poetic imagery under a dead weight of philosophy. On the other hand, in his Ballate, he pours himself out ingenuously, but with a consciousness of his art. The greatest of these is considered to be the ballata composed by Cavalcanti when he was banished from Florence with the party of the Bianchi in 1300, and took refuge at Sarzana. The third poet among the followers of the new school was Cino da Pistoia, of the family of the Sinibuldi. His love poems are sweet, mellow and musical. The 14th century: the roots of Renaissance Dante Dante, one of the greatest of Italian poets, also shows these lyrical tendencies. In 1293 he wrote La Vita Nuova ("new life" in English, so called to indicate that his first meeting with Beatrice was the beginning of a new life), in which he idealizes love. It is a collection of poems to which Dante added narration and explication. Everything is supersensual, aerial, heavenly, and the real Beatrice is supplanted by an idealized vision of her, losing her human nature and becoming a representation of the divine. Dante is the main character of the work, and the narration purports to be autobiographical, though historical information about Dante's life proves this to be poetic license. Several of the lyrics of the La Vita Nuova deal with the theme of the new life. Not all the love poems refer to Beatrice, however—other pieces are philosophical and bridge over to the Convivio. The Divine Comedy Divina Commedia tells of the poet's travels through the three realms of the dead—Hell, Purgatory, and Paradise—accompanied by the Latin poet Virgil. An allegorical meaning hides under the literal one of this great epic. Dante, travelling through Hell, Purgatory, and Paradise, symbolizes mankind aiming at the double object of temporal and eternal happiness. The forest where the poet loses himself symbolizes the civil and religious confusion of society, deprived of its two guides, the emperor and the pope. The mountain illuminated by the sun is universal monarchy. The three beasts are the three vices and the three powers that offered the greatest obstacles to Dante's designs. Envy is Florence, light, fickle and divided by the Black Guelphs and the White Guelphs. Pride is the house of France. Avarice is the papal court. Virgil represents reason and the empire. Beatrice is the symbol of the supernatural aid mankind must have to attain the supreme end, which is God. The merit of the poem does not lie in the allegory, which still connects it with medieval literature. What is new is the individual art of the poet, the classic art transfused for the first time into a Romance form. 
Whether he describes nature, analyses passions, curses the vices or sings hymns to the virtues, Dante is notable for the grandeur and delicacy of his art. He took the materials for his poem from theology, philosophy, history, and mythology, but especially from his own passions, from hatred and love. Under the pen of the poet, the dead come to life again; they become men again, and speak the language of their time, of their passions. Farinata degli Uberti, Boniface VIII, Count Ugolino, Manfred, Sordello, Hugh Capet, St. Thomas Aquinas, Cacciaguida, St. Benedict, and St. Peter, are all so many objective creations; they stand before us in all the life of their characters, their feelings, and their habits. The real chastiser of the sins and rewarder of virtues is Dante himself. The personal interest he brings to bear on the historical representation of the three worlds is what most interests us and stirs us. Dante remakes history after his own passions. Thus the Divina Commedia is not only a lifelike drama of contemporary thoughts and feelings, but also a clear and spontaneous reflection of the individual feelings of the poet, from the indignation of the citizen and the exile to the faith of the believer and the ardour of the philosopher. The Divina Commedia defined the destiny of Italian literature, giving artistic lustre to all forms of literature the Middle Ages had produced. Petrarch Two facts characterize the literary life of Petrarch: classical research and the new human feeling introduced into his lyric poetry. The facts are not separate; rather, the former caused the latter. The Petrarch who unearthed the works of the great Latin writers helps us understand the Petrarch who loved a real woman, named Laura, and celebrated her in her life and after her death in poems full of studied elegance. Petrarch was the first humanist, and he was at the same time the first modern lyric poet. His career was long and tempestuous. He lived for many years at Avignon, cursing the corruption of the papal court; he travelled through nearly the whole of Europe; he corresponded with emperors and popes, and he was considered the most important writer of his time. His Canzoniere is divided into three parts: the first containing the poems written during Laura's lifetime, the second the poems written after her death, the third the Trionfi. The one and only subject of these poems is love; but the treatment is full of variety in conception, in imagery and in sentiment, derived from the most varied impressions of nature. Petrarch's lyric verse is quite different, not only from that of the Provençal troubadours and the Italian poets before him, but also from the lyrics of Dante. Petrarch is a psychological poet, who examines all his feelings and renders them with an art of exquisite sweetness. The lyrics of Petrarch are no longer transcendental like Dante's, but keep entirely within human limits. The second part of the Canzoniere is the more passionate. The Trionfi are inferior; in them Petrarch tried to imitate the Divina Commedia, but failed. The Canzoniere includes also a few political poems, one supposed to be addressed to Cola di Rienzi and several sonnets against the court of Avignon. These are remarkable for their vigour of feeling, and also for showing that, compared to Dante, Petrarch had a sense of a broader Italian consciousness. He wooed an Italy that was different from any conceived by the people of the Middle Ages. In this, he was a precursor of modern times and modern aspirations. 
Petrarch had no decided political idea. He exalted Cola di Rienzi, invoked the emperor Charles IV, and praised the Visconti; in fact, his politics were affected more by impressions than by principles. Above all this was his love of Italy, which in his mind was reunited with Rome, the great city of his heroes, Cicero and Scipio. Petrarca, some say, began the Renaissance humanism. Boccaccio Boccaccio had the same enthusiastic love of antiquity and the same worship for the new Italian literature as Petrarch. He was the first to put together a Latin translation of the Iliad and, in 1375, the Odyssey. His classical learning was shown in the work De genealogia deorum, in which he enumerates the gods according to genealogical trees from the various authors who wrote about the pagan divinities. The Genealogia deorum is, as A. H. Heeren said, an encyclopaedia of mythological knowledge; and it was the precursor of the humanist movement of the 15th century. Boccaccio was also the first historian of women in his De mulieribus claris, and the first to tell the story of the great unfortunates in his De casibus virorum illustrium. He continued and perfected former geographical investigations in his interesting book De montibus, silvis, fontibus, lacubus, fluminibus, stagnis, et paludibus, et de nominibus maris, for which he made use of Vibius Sequester. Of his Italian works, his lyrics do not come anywhere near to the perfection of Petrarch's. His narrative poetry is better. He did not invent the octave stanza, but was the first to use it in a work of length and artistic merit, his Teseide, the oldest Italian romantic poem. The Filostrato relates the loves of Troiolo and Griseida (Troilus and Cressida). It may be that Boccaccio knew the French poem of the Trojan war by Benoit de Sainte-More; but the interest of his poem lies in the analysis of the passion of love. The Ninfale fiesolano tells the love story of the nymph Mesola and the shepherd Africo. The Amorosa Visione, a poem in triplets, doubtless owed its origin to the Divina Commedia. The Ameto is a mixture of prose and poetry, and is the first Italian pastoral romance. The Filocopo takes the earliest place among prose romances. In it Boccaccio tells the loves of Florio and Biancafiore. Probably for this work he drew materials from a popular source or from a Byzantine romance, which Leonzio Pilato may have mentioned to him. In the Filocopo, there is a remarkable exuberance in the mythological part, which damages the romance as an artistic work, but contributes to the history of Boccaccio's mind. The Fiammetta is another romance, about the loves of Boccaccio and Maria d'Aquino, a supposed natural daughter of King Robert, whom he always called by this name of Fiammetta. Boccaccio became famous principally for the Italian work, Decamerone, a collection of a hundred novels, related by a party of men and women who retired to a villa near Florence to escape the plague in 1348. Novel-writing, so abundant in the preceding centuries, especially in France, now for the first time assumed an artistic shape. The style of Boccaccio tends to the imitation of Latin, but in him prose first took the form of elaborated art. The rudeness of the old fabliaux gives place to the careful and conscientious work of a mind that has a feeling for what is beautiful, that has studied the classic authors, and that strives to imitate them as much as possible. Over and above this, in the Decamerone, Boccaccio is a delineator of character and an observer of passions. 
In this lies his novelty. Much has been written about the sources of the novels of the Decamerone. Probably Boccaccio made use both of written and of oral sources. Popular tradition must have furnished him with the materials of many stories, as, for example, that of Griselda. Unlike Petrarch, who was always discontented, preoccupied, wearied with life, disturbed by disappointments, we find Boccaccio calm, serene, satisfied with himself and with his surroundings. Notwithstanding these fundamental differences in their characters, the two great authors were old and warm friends. But their affection for Dante was not equal. Petrarch, who says that he saw him once in his childhood, did not preserve a pleasant recollection of him, and it would be useless to deny that he was jealous of his renown. The Divina Commedia was sent him by Boccaccio, when he was an old man, and he confessed that he never read it. On the other hand, Boccaccio felt for Dante something more than love—enthusiasm. He wrote a biography of him (which some critics deprecate the accuracy of) and gave public critical lectures on the poem in Santa Maria del Fiore at Florence.
Others
Imitators
Fazio degli Uberti and Federico Frezzi were imitators of the Divina Commedia, but only in its external form. The former wrote the Dittamondo, a long poem, in which the author supposes that he was taken by the geographer Solinus into different parts of the world, and that his guide related the history of them. The legends of the rise of the different Italian cities have some importance historically. Frezzi, bishop of his native town Foligno, wrote the Quadriregio, a poem of the four kingdoms: Love, Satan, the Vices, and the Virtues. This poem has many points of resemblance with the Divina Commedia. Frezzi pictures the condition of man who rises from a state of vice to one of virtue, and describes hell, limbo, purgatory and heaven. The poet has Pallas for a companion. Ser Giovanni Fiorentino wrote, under the title of Pecorone, a collection of tales, which are supposed to have been related by a monk and a nun in the parlour of the monastery of Forlì. He closely imitated Boccaccio, and drew on Villani's chronicle for his historical stories. Franco Sacchetti wrote tales too, for the most part on subjects taken from Florentine history. His book gives a lifelike picture of Florentine society at the end of the 14th century. The subjects are almost always improper, but it is evident that Sacchetti collected these anecdotes so he could draw his own conclusions and moral reflections, which he puts at the end of each story. From this point of view, Sacchetti's work comes near to the Moralisationes of the Middle Ages. A third novelist was Giovanni Sercambi of Lucca, who after 1374 wrote a book, in imitation of Boccaccio, about a party of people who were supposed to fly from a plague and to go travelling about in different Italian cities, stopping here and there telling stories. Later, but important, names are those of Masuccio Salernitano (Tommaso Guardato), who wrote the Novellino, and Antonio Cornazzano, whose Proverbii became extremely popular.
Chronicles
Chronicles formerly believed to have been of the 13th century are now mainly regarded as forgeries. At the end of the 13th century there is a chronicle by Dino Compagni, probably authentic. Giovanni Villani, born in 1300, was more of a chronicler than an historian. He relates the events up to 1347.
The journeys that he made in Italy and France, and the information thus acquired, mean that his chronicle, the Historie Fiorentine, covers events all over Europe. He speaks at length, not only of events in politics and war, but of the stipends of public officials, the sums of money used to pay for soldiers and public festivals, and many other things of which knowledge is valuable. Villani's narrative is often encumbered with fables and errors, particularly when he speaks of things that happened before his time. Matteo was the brother of Giovanni Villani, and continued the chronicle up to 1363. It was again continued by Filippo Villani. Ascetics The Divine Commedia is ascetic in its conception, and in a good many points of its execution. Petrarch's work has similar qualities; yet neither Petrarch nor Dante could be classified among the pure ascetics of their time. But many other writers come under this head. St Catherine of Siena's mysticism was political. This extraordinary woman aspired to bring back the Church of Rome to evangelical virtue, and left a collection of letters written in a high and lofty tone to all kinds of people, including popes. Hers is the clearest religious utterance to have made itself heard in 14th-century Italy. Although precise ideas of reformation did not enter her head, the want of a great moral reform was felt in her heart. She must take her place among those who prepared the way for the religious movement of the 16th century. Another Sienese, Giovanni Colombini, founder of the order of Jesuati, preached poverty by precept and example, going back to the religious idea of St Francis of Assisi. His letters are among the most remarkable in the category of ascetic works in the 14th century. Bianco da Siena wrote several religiously-inspired poems (lauda) that were popular in the Middle Ages. Jacopo Passavanti, in his Specchio della vera penitenza, attached instruction to narrative. Domenico Cavalca translated from the Latin the Vite de' Santi Padri. Rivalta left behind him many sermons, and Franco Sacchetti (the famous novelist) many discourses. On the whole, there is no doubt that one of the most important productions of the Italian spirit of the 14th century was religious literature. Popular works Humorous poetry, largely developed in the 13th century, was carried on in the 14th by Bindo Bonichi, Arrigo di Castruccio, Cecco Nuccoli, Andrea Orgagna, Filippo de Bardi, Adriano de Rossi, Antonio Pucci and other lesser writers. Orgagna was specially comic; Bonichi was comic with a satirical and moral purpose. Pucci was superior to all of them for the variety of his production. He put into triplets the chronicle of Giovanni Villani (Centiloquio), and wrote many historical poems called Serventesi, many comic poems, and not a few epico-popular compositions on various subjects. A little poem of his in seven cantos treats of the war between the Florentines and the Pisans from 1362 to 1365. Other poems drawn from a legendary source celebrate the Reina d'Oriente, Apollonio di Tiro, the Bel Gherardino, etc. These poems, meant to be recited, are the ancestors of the romantic epic. Political works Many poets of the 14th century produced political works. Fazio degli Uberti, the author of Dittamondo, who wrote a Serventese to the lords and people of Italy, a poem on Rome, and a fierce invective against Charles IV, deserves notice, as do Francesco di Vannozzo, Frate Stoppa and Matteo Frescobaldi. 
It may be said in general that following the example of Petrarch many writers devoted themselves to patriotic poetry. From this period also dates that literary phenomenon known under the name of Petrarchism. The Petrarchists, or those who sang of love, imitating Petrarch's manner, were found already in the 14th century. But others treated the same subject with more originality, in a manner that might be called semi-popular. Such were the Ballate of Ser Giovanni Fiorentino, of Franco Sacchetti, of Niccolo Soldanieri, and of Guido and Bindo Donati. Ballate were poems sung to dancing, and we have very many songs for music of the 14th century. We have already stated that Antonio Pucci versified Villani's Chronicle. It is enough to notice a chronicle of Arezzo in terza rima by Gorello de Sinigardi, and the history, also in terza rima, of the journey of Pope Alexander III to Venice, by Pier de Natali. Besides this, every kind of subject, whether history, tragedy or husbandry, was treated in verse. Neri di Landocio wrote a life of St Catherine; Jacopo Gradenigo put the Gospels into triplets. The 15th century: Renaissance humanism Renaissance humanism developed during the 14th and the beginning of the 15th centuries, and was a response to the challenge of Mediæval scholastic education, emphasizing practical, pre-professional and -scientific studies. Scholasticism focused on preparing men to be doctors, lawyers or professional theologians, and was taught from approved textbooks in logic, natural philosophy, medicine, law and theology. The main centers of humanism were Florence and Naples. Rather than train professionals in jargon and strict practice, humanists sought to create a citizenry (including, sometimes, women) able to speak and write with eloquence and clarity. Thus, they would be capable of better engaging the civic life of their communities and persuading others to virtuous and prudent actions. This was to be accomplished through the study of the studia humanitatis, today known as the humanities: grammar, rhetoric, history, poetry and moral philosophy. Early humanists, such as Petrarch, Coluccio Salutati and Leonardo Bruni, were great collectors of antique manuscripts. Many worked for the organized Church and were in holy orders (like Petrarch), while others were lawyers and chancellors of Italian cities, like Petrarch's disciple, Salutati, the Chancellor of Florence, and thus had access to book copying workshops. In Italy, the humanist educational program won rapid acceptance and, by the mid-15th century, many of the upper classes had received humanist educations. Some of the highest officials of the Church were humanists with the resources to amass important libraries. Such was Cardinal Basilios Bessarion, a convert to the Latin Church from Greek Orthodoxy, who was considered for the papacy and was one of the most learned scholars of his time. There were five 15th-century Humanist Popes, one of whom, Aeneas Silvius Piccolomini (Pius II), was a prolific author and wrote a treatise on "The Education of Boys". Literature in the Florence of the Medici At Florence the most celebrated humanists wrote also in the vulgar tongue, and commented on Dante and Petrarch, and defended them from their enemies. 
Leone Battista Alberti, the learned Greek and Latin scholar, wrote in the vernacular, and Vespasiano da Bisticci, while he was constantly absorbed in Greek and Latin manuscripts, wrote the Vite di uomini illustri, valuable for their historical contents, and rivalling the best works of the 14th century in their candour and simplicity. Andrea da Barberino wrote the beautiful prose of the Reali di Francia, giving a coloring of romanità to the chivalrous romances. Belcari and Girolamo Benivieni returned to the mystic idealism of earlier times. But it is in Lorenzo de Medici that the influence of Florence on the Renaissance is particularly seen. His mind was formed by the ancients: he attended the class of the Greek John Argyropulos, sat at Platonic banquets, took pains to collect codices, sculptures, vases, pictures, gems and drawings to ornament the gardens of San Marco and to form the library later named after him. In the saloons of his Florentine palace, in his villas at Careggi, Fiesole and Anibra, stood the wonderful chests painted by Dello di Niccolò Delli with stories from Ovid, the Hercules of Pollaiuolo, the Pallas of Botticelli, the works of Filippino and Verrocchio. De Medici lived entirely in the classical world; and yet if we read his poems we only see the man of his time, the admirer of Dante and of the old Tuscan poets, who takes inspiration from the popular muse, and who succeeds in giving to his poetry the colors of the most pronounced realism as well as of the loftiest idealism, who passes from the Platonic sonnet to the impassioned triplets of the Amori di Venere, from the grandiosity of the Salve to Nencia and to Beoni, from the Canto carnascialesco to the lauda. The feeling of nature is strong in him; at one time sweet and melancholy, at another vigorous and deep, as if an echo of the feelings, the sorrows, the ambitions of that deeply agitated life. He liked to look into his own heart with a severe eye, but he was also able to pour himself out with tumultuous fulness. He described with the art of a sculptor; he satirized, laughed, prayed, sighed, always elegant, always a Florentine, but a Florentine who read Anacreon, Ovid and Tibullus, who wished to enjoy life, but also to taste of the refinements of art. Next to Lorenzo comes Poliziano, who also united, and with greater art, the ancient and the modern, the popular and the classical style. In his Rispetti and in his Ballate the freshness of imagery and the plasticity of form are inimitable. A great Greek scholar, Poliziano wrote Italian verses with dazzling colors; the purest elegance of the Greek sources pervaded his art in all its varieties, in the Orfeo as well as the Stanze per la giostra. A completely new style of poetry arose, the Canto carnascialesco. These were a kind of choral songs, which were accompanied with symbolic masquerades, common in Florence at the carnival. They were written in a metre like that of the ballate; and for the most part they were put into the mouth of a party of workmen and tradesmen, who, with not very chaste allusions, sang the praises of their art. These triumphs and masquerades were directed by Lorenzo himself. In the evening, there set out into the city large companies on horseback, playing and singing these songs. There are some by Lorenzo himself, which surpass all the others in their mastery of art. That entitled Bacco ed Arianna is the most famous. 
Epic: Pulci and Boiardo
Italy did not yet have true epic poetry; it had, however, many poems called cantari, because they contained stories that were sung to the people; and besides there were romantic poems, such as the Buovo d'Antona, the Regina Ancroja and others. But the first to introduce life into this style was Luigi Pulci, who grew up in the house of the Medici, and who wrote the Morgante Maggiore at the request of Lucrezia Tornabuoni, mother of Lorenzo the Magnificent. The material of the Morgante is almost completely taken from an obscure chivalrous poem of the 15th century, rediscovered by Pio Rajna. Pulci erected a structure of his own, often turning the subject into ridicule, burlesquing the characters, introducing many digressions, now capricious, now scientific, now theological. Pulci raised the romantic epic into a work of art, and united the serious and the comic. With a more serious intention Matteo Boiardo, count of Scandiano, wrote his Orlando innamorato, in which he seems to have aspired to embrace the whole range of Carolingian legends; but he did not complete his task. We find here too a large vein of humour and burlesque. Still Boiardo was drawn to the world of romance by a profound sympathy for chivalrous manners and feelings; that is to say, for love, courtesy, valour and generosity. A third romantic poem of the 15th century was the Mambriano by Francesco Bello (Cieco of Ferrara). He drew from the Carolingian cycle, from the romances of the Round Table, and from classical antiquity. He was a poet of no common genius, and of ready imagination. He showed the influence of Boiardo, especially in the use of fantasy.
Other
History had neither many nor very good students in the 15th century. Its revival belonged to the following age. It was mostly written in Latin. Leonardo Bruni of Arezzo wrote the history of Florence, Gioviano Pontano that of Naples, in Latin. Bernardino Corio wrote the history of Milan in Italian, but in a rude way. Leonardo da Vinci wrote a treatise on painting, Leone Battista Alberti one on sculpture and architecture. But the names of these two men are important, not so much as authors of these treatises, but as being embodiments of another characteristic of the age of the Renaissance: versatility of genius, power of application along many and varied lines, and of being excellent in all. Leonardo was an architect, a poet, a painter, a hydraulic engineer and a distinguished mathematician. Alberti was a musician, studied jurisprudence, was an architect and a draughtsman, and had great fame in literature. He had a deep feeling for nature, and an almost unique faculty of assimilating all that he saw and heard. Leonardo and Alberti are representatives and almost a compendium in themselves of all that intellectual vigour of the Renaissance age, which in the 16th century took to developing itself in its individual parts, making way for what has by some been called the golden age of Italian literature. Piero Capponi, author of the Commentari dell'acquisto di Pisa and of the narration of the Tumulto dei Ciompi, belonged to both the 14th and the 15th centuries. Albertino Mussato of Padua wrote in Latin a history of Emperor Henry VII. He then produced a Latin tragedy on Ezzelino da Romano, Henry's imperial vicar in northern Italy, the Eccerinus, which was probably not represented on the stage. This remained an isolated work. The development of the drama in the 15th century was very great.
This kind of semi-popular literature was born in Florence, and attached itself to certain popular festivities that were usually held in honor of St John the Baptist, patron saint of the city. The Sacra Rappresentazione is the development of the medieval Mistero (mystery play). Although it belonged to popular poetry, some of its authors were literary men of much renown: Lorenzo de Medici, for example, wrote San Giovanni e Paolo, and Feo Belcari wrote San Panunzio, Abramo ed Isaac, and more. From the 15th century, some element of the comic-profane found its way into the Sacra Rappresentazione. From its Biblical and legendary conventionalism Poliziano emancipated himself in his Orfeo, which, although in its exterior form belonging to the sacred representations, yet substantially detaches itself from them in its contents and in the artistic element introduced. The 16th century: the High Renaissance The fundamental characteristic of the literary epoch following that of the Renaissance is that it perfected itself in every kind of art, in particular uniting the essentially Italian character of its language with classicism of style. This period lasted from about 1494 to about 1560—1494 being when Charles VIII descended into Italy, marking the beginning of Italy's foreign domination and political decadence. The famous men of the first half of the 16th century had been educated in the preceding century. Pietro Pomponazzi was born in 1462, Marcello Adriani Virgilio in 1464, Baldassare Castiglione in 1468, Niccolò Machiavelli in 1469, Pietro Bembo in 1470, Michelangelo Buonarroti and Ariosto in 1474, Jacopo Nardi in 1476, Gian Giorgio Trissino in 1478, and Francesco Guicciardini in 1482. Literary activity that appeared from the end of the 15th century to the middle of the 16th century was the product of the political and social conditions of an earlier age. The science of history: Machiavelli and Guicciardini Machiavelli and Guicciardini were the chief originators of the science of history. Machiavelli's principal works are the Istorie fiorentine, the Discorsi sulla prima deca di Tito Livio, the Arte della guerra and the Principe. His merit consists in having emphasized the experimental side of the study of political action in having observed facts, studied histories and drawn principles from them. His history is sometimes inexact in facts; it is rather a political than an historical work. The peculiarity of Machiavelli's genius lay, as has been said, in his artistic feeling for the treatment and discussion of politics in and for themselves, without regard to an immediate end in his power of abstracting himself from the partial appearances of the transitory present, in order more thoroughly to possess himself of the eternal and inborn kingdom, and to bring it into subjection to himself. Next to Machiavelli both as an historian and a statesman comes Guicciardini. Guicciardini was very observant, and endeavoured to reduce his observations to a science. His Storia d'Italia, which extends from the death of Lorenzo de Medici to 1534, is full of political wisdom, is skillfully arranged in its parts, gives a lively picture of the character of the persons it treats of, and is written in a grand style. He shows a profound knowledge of the human heart, and depicts with truth the temperaments, the capabilities and habits of the different European nations. Going back to the causes of events, he looked for the explanation of the divergent interests of princes and of their reciprocal jealousies. 
The fact of his having witnessed many of the events he related, and having taken part in them, adds authority to his words. The political reflections are always deep; in the Pensieri, as Gino Capponi says, he seems to aim at extracting through self-examination a quintessence, as it were, of the things observed and done by him; thus endeavouring to form a political doctrine as adequate as possible in all its parts. Machiavelli and Guicciardini may be considered as distinguished historians as well as originators of the science of history founded on observation. Inferior to them, but still always worthy of note, were Jacopo Nardi (a just and faithful historian and a virtuous man, who defended the rights of Florence against the Medici before Charles V), Benedetto Varchi, Giambattista Adriani, Bernardo Segni, and, outside Tuscany, Camillo Porzio, who related the Congiura de baroni and the history of Italy from 1547 to 1552; Angelo di Costanza, Pietro Bembo, Paolo Paruta, and others.
Ludovico Ariosto
Ariosto's Orlando furioso was a continuation of Boiardo's Innamorato. His characteristic is that he assimilated the romance of chivalry to the style and models of classicism. Ariosto was an artist only for the love of his art; his sole aim was to make a romance that would please himself and his generation. His Orlando has no grave and serious purpose. On the contrary, it creates a fantastic world in which the poet rambles, indulges his caprice, and sometimes smiles at his own work. His great desire is to depict everything with the greatest possible perfection; the cultivation of style is what occupies him most. In his hands the style becomes wonderfully plastic to every conception, whether high or low, serious or sportive. With him, the octave stanza reached a high level of grace, variety, and harmony.
Pietro Bembo
Pietro Bembo was an influential figure in the development of the Italian language, specifically Tuscan, as a literary medium, and his writings assisted in the 16th-century revival of interest in the works of Petrarch. As a writer, Bembo attempted to restore some of the legendary "affect" that ancient Greek had on its hearers, but in Tuscan Italian instead. He held as his model, and as the highest example of poetic expression ever achieved in Italian, the work of Petrarch and Boccaccio, two 14th-century writers he assisted in bringing back into fashion. In the Prose della volgar lingua, he set Petrarch up as the perfect model, and discussed verse composition in detail, including rhyme, stress, the sounds of words, balance and variety. In Bembo's theory, the specific placement of words in a poem, with strict attention to their consonants and vowels, their rhythm, their position within lines long and short, could produce emotions ranging from sweetness and grace to gravity and grief in a listener. This work was of decisive importance in the development of the Italian madrigal, the most famous secular musical form of the 16th century, as it was these poems, carefully constructed (or, in the case of Petrarch, analyzed) according to Bembo's ideas, that were to be the primary texts for the music.
Torquato Tasso
The historians of Italian literature are in doubt whether Tasso should be placed in the period of the highest development of the Renaissance, or whether he should form a period by himself, intermediate between that and the one following. Certainly he was profoundly out of harmony with his own century.
His religious faith, the seriousness of his character, the deep melancholy settled in his heart, his continued aspiration after an ideal perfection—all place him outside the literary epoch represented by Machiavelli, Ariosto, and Berni. As Carducci said, Tasso is the legitimate heir of Dante: he believes, and reasons on his faith by philosophy; he loves, and comments on his love in a learned style; he is an artist, and writes dialogues of scholastic speculation that would be considered Platonic. He was only eighteen years old when, in 1562, he tried his hand at epic poetry, and wrote Rinaldo, in which he said that he had tried to reconcile the Aristotelian rules with the variety of Ariosto. He later wrote the Aminta, a pastoral drama of exquisite grace, but the work to which he had long turned his thoughts was an heroic poem, and that absorbed all his powers. He explains his intentions in the three Discorsi, written while he composed the Gerusalemme: he would choose a great and wonderful subject, not so ancient as to have lost all interest, nor so recent as to prevent the poet from embellishing it with invented circumstances. He would treat it rigorously according to the rules of the unity of action observed in Greek and Latin poems, but with a far greater variety and splendour of episodes, so that in this point it should not fall short of the romantic poem; and finally, he would write it in a lofty and ornate style. This is what Tasso has done in the Gerusalemme liberata, the subject of which is the liberation of the sepulchre of Jesus Christ in the 11th century by Godfrey of Bouillon. The poet does not follow faithfully all the historical facts, but sets before us the principal causes of them, bringing in the supernatural agency of God and Satan. The Gerusalemme is the best heroic poem that Italy can show. It approaches to classical perfection. Its episodes above all are most beautiful. There is profound feeling in it, and everything reflects the melancholy soul of the poet. As regards the style, however, although Tasso studiously endeavoured to keep close to the classical models, one cannot help noticing that he makes excessive use of metaphor, of antithesis, of far-fetched conceits; and it is specially from this point of view that some historians have placed Tasso in the literary period generally known under the name of Secentismo, and that others, more moderate in their criticism, have said that he prepared the way for it.
Minor writers
Meanwhile, there was an attempt at the historical epic. Gian Giorgio Trissino of Vicenza composed a poem called Italia liberata dai Goti. Full of learning and of the rules of the ancients, he formed himself on the latter, in order to sing of the campaigns of Belisarius; he said that he had forced himself to observe all the rules of Aristotle, and that he had imitated Homer. In this again, we see one of the products of the Renaissance; and, although Trissino's work is poor in invention and without any original poetical coloring, yet it helps one to understand better what were the conditions of mind in the 16th century. Lyric poetry was certainly not one of the kinds that rose to any great height in the 16th century. Originality was entirely wanting, since it seemed in that century as if nothing better could be done than to copy Petrarch. Still, even in this style there were some vigorous poets. Monsignore Giovanni Guidiccioni of Lucca (1500–1541) showed that he had a generous heart.
In fine sonnets he expressed his grief for the sad state of his country. Francesco Molza of Modena (1489–1544), learned in Greek, Latin and Hebrew, wrote in a graceful style and with spirit. Giovanni della Casa (1503–1556) and Pietro Bembo (1470–1547), although Petrarchists, were elegant. Even Michelangelo was at times a Petrarchist, but his poems bear the stamp of his extraordinary and original genius. And a good many ladies are to be placed near these poets, such as Vittoria Colonna (loved by Michelangelo), Veronica Gambara, Tullia d'Aragona, and Giulia Gonzaga, poets of great delicacy, and superior in genius to many literary men of their time. Isabella di Morra is a singular example of female poetry of the time, whose sorrowful life was one of the most poignant and tragic stories to emerge from the Italian Renaissance. Many tragedies were written in the 16th century, but they are all weak. The cause of this was the moral and religious indifference of the Italians, the lack of strong passions and vigorous characters. The first to occupy the tragic stage was Trissino with his Sofonisba, following the rules of the art most scrupulously, but written in sickly verses, and without warmth of feeling. The Oreste and the Rosmunda of Giovanni Rucellai were no better, nor Luigi Alamanni's translation of Antigone. Sperone Speroni in his Canace and Giraldi Cintio in his Orbecche tried to become innovators in tragic literature, but provoked criticisms of grotesquerie and debate over the role of decorum. They were often seen as inferior to the Torrismondo of Torquato Tasso, specially remarkable for the choruses, which sometimes remind one of the chorus of the Greek tragedies. Italian comedies of the 16th century were almost entirely modelled on Latin comedy. They were almost always alike in the plot, in the characters of the old man, of the servant, of the waiting-maid; and the argument was often the same. Thus the Lucidi of Agnolo Firenzuola, and the Vecchio amoroso of Donato Giannotti were modelled on comedies by Plautus, as were the Sporta by Giambattista Gelli, the Marito by Lodovico Dolce, and others. There appear to be only three writers who should be distinguished among the many who wrote comedies: Machiavelli, Ariosto, and Giovan Maria Cecchi. In his Mandragola Machiavelli, unlike the others, composed a comedy of character, creating personalities that seem living even now because he copied them from reality with a finely observant eye. Ariosto, on the other hand, was distinguished for his picture of the habits of his time, and especially of those of the Ferrarese nobles, rather than for the objective delineation of character. Lastly, Cecchi left in his comedies a treasure of spoken language, which lets us, in a wonderful way, acquaint ourselves with that age. The notorious Pietro Aretino might also be included in the list of the best writers of comedy. Humorous poetry had already appeared in the 15th century. Antonio Cammelli, surnamed the Pistoian, is specially deserving of notice, because of his pungent bonhomie, as Sainte-Beuve called it. But it was Francesco Berni who carried this kind of literature to perfection in the 16th century. From him the style has been called bernesque poetry. In the Berneschi we find nearly the same phenomenon that we already noticed with regard to Orlando furioso. It was art for art's sake that inspired and moved Berni to write, as well as Antonio Francesco Grazzini, called Il Lasca, and other lesser writers.
It may be said that there is nothing serious in their poetry; and it is true that they specially delight in praising low and disgusting things and in jeering at what is noble and serious. Bernesque poetry is the clearest reflection of that religious and moral scepticism that was a characteristic of Italian social life in the 16th century, and that showed itself in most of the works of that period—a scepticism that stopped the religious Reformation in Italy, and which in its turn was an effect of historical conditions. The Berneschi, and especially Berni himself, sometimes assumed a satirical tone. But theirs could not be called true satire. Pure satirists, on the other hand, were Antonio Vinciguerra, a Venetian, Lodovico Alamanni and Ariosto, the last superior to the others for the Attic elegance of his style, and for a certain frankness, passing into malice, which is particularly interesting when the poet talks of himself. In the 16th century there were not a few didactic works. In his poem Le Api Giovanni Rucellai approaches the perfection of Virgil. His style is clear and light, and he adds interest to his book by frequent allusions to the events of the time. The most important didactic work, however, is Castiglione's Cortigiano, in which he imagines a discussion in the palace of the dukes of Urbino between knights and ladies as to what gifts a perfect courtier requires. This book is valuable as an illustration of the intellectual and moral state of the highest Italian society in the first half of the 16th century. Of the novelists of the 16th century, the two most important were Grazzini and Matteo Bandello; the former as playful and bizarre as the latter is grave and solemn. Bandello was a Dominican friar and a bishop; notwithstanding this, his novels were very loose in subject, and he often holds up the ecclesiastics of his time to ridicule. At a time when admiration for qualities of style, the desire for classical elegance, was so strong as in the 16th century, much attention was naturally paid to translating Latin and Greek authors. Among the very numerous translations of the time those of the Aeneid and of the Pastorals of Longus the Sophist by Annibale Caro are still famous; as are also the translations of Ovid's Metamorphoses by Giovanni Andrea dell' Anguillara, of Apuleius's The Golden Ass by Firenzuola, and of Plutarch's Lives and Moralia by Marcello Adriani.
The 17th century: A period of decadence
From about 1559 began a period of decadence in Italian literature. Tommaso Campanella was tortured by the Inquisition, and Giordano Bruno was burned at the stake. Cesare Balbo says that, if the happiness of the masses consists in peace without industry, if the nobility's consists in titles without power, if princes are satisfied by acquiescence in their rule without real independence, without sovereignty, if literary men and artists are content to write, paint and build with the approbation of their contemporaries, but to the contempt of posterity, if a whole nation is happy in ease without dignity and the tranquil progress of corruption, then no period ever was so happy for Italy as the 140 years from the Peace of Cateau Cambrésis to the War of the Spanish Succession. This period is known in the history of Italian literature as the Secentismo. Its writers resorted to exaggeration; they tried to produce effect with what in art is called mannerism or barocchism.
Writers vied with one another in their use of metaphors, affectations, hyperbole and other oddities, giving importance to form and drawing attention away from the substance of thought.
Marinism
At the head of the school of the Secentisti was Giambattista Marino of Naples, born in 1569, especially known for his long poem, Adone. He used the most extravagant metaphors, the most forced antitheses and the most far-fetched conceits. He strung antitheses together one after the other, so that they filled up whole stanzas without a break. Claudio Achillini of Bologna followed in Marino's footsteps, but his peculiarities were even more extravagant. Almost all the poets of the 17th century were more or less infected with Marinism. Alessandro Guidi, although he does not attain to the exaggeration of his master, is bombastic and turgid, while Fulvio Testi is artificial and affected. Yet Guidi as well as Testi felt the influence of another poet, Gabriello Chiabrera, born at Savona in 1552. Enamoured of the Greeks, he made new metres, especially in imitation of Pindar, treating of religious, moral, historical, and amatory subjects. Chiabrera, though elegant in form, attempts to disguise a lack of substance with poetical ornaments of every kind. Nevertheless, Chiabrera's school marks an improvement; and sometimes he shows lyrical capacities, wasted on his literary environment.
Arcadia
The belief arose that it would be necessary to change the form in order to restore literature. In 1690 the Academy of Arcadia was instituted. Its founders were Giovan Maria Crescimbeni and Gian Vincenzo Gravina. The Arcadia was so called because its chief aim was to imitate the simplicity of the ancient shepherds who were supposed to have lived in Arcadia in the golden age. As the Secentisti erred by an overweening desire for novelty, so the Arcadians proposed to return to the fields of truth, always singing of subjects of pastoral simplicity. This was merely the substitution of a new artifice for the old one; and they fell from bombast into effeminacy, from the hyperbolical into the petty, from the turgid into the over-refined. The Arcadia was a reaction against Secentismo, but a reaction that only succeeded in impoverishing still further and completely withering Italian literature. The poems of the Arcadians fill many volumes, and are made up of sonnets, madrigals, canzonette and blank verse. The one who most distinguished himself among the sonneteers was Felice Zappi. Among the authors of songs, Paolo Rolli was illustrious. Innocenzo Frugoni was more famous than all the others, a man of fruitful imagination but of shallow intellect. The members of the Arcadia were almost exclusively men, but at least one woman, Maria Antonia Scalera Stellini, managed to be elected on poetical merits. Vincenzo da Filicaja, a Florentine, had a lyric talent, particularly in the songs about Vienna besieged by the Turks, which raised him above the vices of the time; but even in him we see clearly the rhetorical artifice and false conceits. In general all the lyric poetry of the 17th century had the same defects, but in different degrees. These defects may be summed up as absence of feeling and exaggeration of form.
The independent thinkers Whilst the political and social conditions in Italy in the 17th century made it appear that every light of intelligence was extinguished, some strong and independent thinkers, such as Bernardino Telesio, Lucilio Vanini, Bruno and Campanella turned philosophical inquiry into fresh channels, and opened the way for the scientific conquests of Galileo Galilei, the great contemporary of René Descartes in France and of Francis Bacon in England. Galileo was not only a great man of science, but also occupied a conspicuous place in the history of letters. A devoted student of Ariosto, he seemed to transfuse into his prose the qualities of that great poet: clear and frank freedom of expression, precision and ease, and at the same time elegance. Galileo's prose is in perfect antithesis to the poetry of his time and is regarded by some as the best prose that Italy has ever had. Another symptom of revival, a sign of rebellion against the vileness of Italian social life, is given us in satire, particularly that of Salvator Rosa and Alessandro Tassoni. Rosa, born in 1615 near Naples, was a painter, a musician and a poet. As a poet he mourned the sad condition of his country, and gave vent to his feeling (as another satire-writer, Giuseppe Giusti, said) in generosi rabbuffi. He was a precursor of the patriotic literature that inaugurated the revival of the 18th century. Tassoni showed independent judgment in the midst of universal servility, and his Secchia Rapita proved that he was an eminent writer. This is an heroic comic poem, which is at the same time an epic and a personal satire. He was bold enough to attack the Spaniards in his Filippiche, in which he urged Duke Carlo Emanuele of Savoy to persist in the war against them. Agriculture Paganino Bonafede in the Tesoro de rustici gave many precepts in agriculture, beginning that kind of georgic poetry later fully developed by Alamanni in his Coltivazione, by Girolamo Baruffaldi in the Canapajo, by Rucellai in Le api, by Bartolomeo Lorenzi in the Coltivazione de' monti, and by Giambattista Spolverini in the Coltivazione del riso. The revival in the 18th century: the Age of Reason and Reform In the 18th century, the political condition of Italy began to improve, under Joseph II, Holy Roman Emperor, and his successors. These princes were influenced by philosophers, who in their turn felt the influence of a general movement of ideas at large in many parts of Europe, sometimes called The Enlightenment. History and society: Vico, Muratori and Beccaria Giambattista Vico showed the awakening of historical consciousness in Italy. In his Scienza nuova, he investigated the laws governing the progress of the human race, and according to which events develop. From the psychological study of man he tried to infer the comune natura delle nazioni, i.e., the universal laws of history, by which civilizations rise, flourish and fall. From the same scientific spirit that inspired Vico came a different kind of investigation, that of the sources of Italian civil and literary history. Lodovico Antonio Muratori, after having collected in his Rerum Italicarum scriptores the chronicles, biographies, letters and diaries of Italian history from 500 to 1500, and having discussed the most obscure historical questions in the Antiquitates Italicae medii aevi, wrote the Annali d'Italia, minutely narrating facts derived from authentic sources. Muratori's associates in his historical research were Scipione Maffei of Verona and Apostolo Zeno of Venice. 
In his Verona illustrata Maffei left a treasure of learning that was also an excellent historical monograph. Zeno added much to the erudition of literary history, both in his Dissertazioni Vossiane and in his notes to the Biblioteca dell'eloquenza italiana of Monsignore Giusto Fontanini. Girolamo Tiraboschi and Count Giovanni Maria Mazzuchelli of Brescia devoted themselves to literary history. While the new spirit of the times led to the investigation of historical sources, it also encouraged inquiry into the mechanism of economic and social laws. Francesco Galiani wrote on currency; Gaetano Filangieri wrote a Scienza della legislazione. Cesare Beccaria, in his Trattato dei delitti e delle pene, made a contribution to the reform of the penal system and promoted the abolition of torture. Metastasio and the melodramma The reforming movement sought to throw off the conventional and the artificial, and to return to truth. Apostolo Zeno and Metastasio (the Arcadian name for Pietro Trapassi, a native of Rome) had endeavoured to make melodrama and reason compatible. Metastasio gave fresh expression to the affections, a natural turn to the dialogue and some interest to the plot; if he had not fallen into constant unnatural overrefinement and mawkishness, and into frequent anachronisms, he might have been considered the first dramatic reformer of the 18th century. Carlo Goldoni Carlo Goldoni, a Venetian, overcame resistance from the old popular form of comedy, with the masks of pantalone, of the doctor, harlequin, Brighella, etc., and created the comedy of character, following Molière's example. Goldoni's characters are often superficial, but he wrote lively dialogue. He produced over 150 comedies, and had no time to polish and perfect his works; but for a comedy of character we must go straight from Machiavelli's Mandragola to him. Goldoni's dramatic aptitude is illustrated by the fact that he took nearly all his types from Venetian society, yet managed to give them an inexhaustible variety. Many of his comedies were written in Venetian. Giuseppe Parini The leading figure of the literary revival of the 18th century was Giuseppe Parini. Born in a Lombard village in 1729, he was educated at Milan, and as a youth was known among the Arcadian poets by the name of Darisbo Elidonio. Even as an Arcadian, Parini showed originality. In a collection of poems he published at twenty-three years of age, under the name of Ripano Eupilino, the poet shows his faculty of taking his scenes from real life, and in his satirical pieces he exhibits a spirit of outspoken opposition to his own times. These poems, though derivative, indicate a resolute determination to challenge the literary conventionalities. Improving on the poems of his youth, he showed himself an innovator in his lyrics, rejecting at once Petrarchism, Secentismo and Arcadia, the three maladies that he thought had weakened Italian art in the preceding centuries. In the Odi the satirical note is already heard, but it comes out more strongly in Del giorno, in which he imagines himself to be teaching a young Milanese patrician all the habits and ways of gallant life; he shows up all its ridiculous frivolities, and with delicate irony unmasks the futilities of aristocratic habits. Dividing the day into four parts, the Mattino, the Mezzogiorno, the Vespero, and the Notte, he describes the trifles of which they were made up, and the book thus assumes major social and historical value. 
As an artist, going straight back to classical forms, aspiring to imitate Virgil and Dante, he opened the way to the school of Vittorio Alfieri, Ugo Foscolo and Vincenzo Monti. As a work of art, the Giorno is wonderful for its delicate irony. The verse has new harmonies; sometimes it is a little hard and broken, as a protest against the Arcadian monotony.
Linguistic purism
Whilst the most burning political passions were raging, and whilst the most brilliant men of genius of the new classical and patriotic school were at the height of their influence, a question arose about purity of language. In the second half of the 18th century the Italian language was specially full of French expressions. There was great indifference about fitness, still more about elegance of style. Prose needed to be restored for the sake of national dignity, and it was believed that this could not be done except by going back to the writers of the 14th century, to the aurei trecentisti, as they were called, or else to the classics of Italian literature. One of the promoters of the new school was Antonio Cesari of Verona, who republished ancient authors, and brought out a new edition, with additions, of the Vocabolario della Crusca. He wrote a dissertation Sopra lo stato presente della lingua italiana, and endeavoured to establish the supremacy of Tuscan and of the three great writers, Dante, Petrarch, and Boccaccio. In accordance with that principle he wrote several books, taking pains to copy the trecentisti as closely as possible. But patriotism in Italy has always had something municipal in it; so to this Tuscan supremacy, proclaimed and upheld by Cesari, there was opposed a Lombard school, which would know nothing of Tuscan, and with Dante's De vulgari eloquentia returned to the idea of the lingua illustre. This was an old question, largely and bitterly argued in the Cinquecento (16th century) by Varchi, Muzio, Lodovico Castelvetro, Speroni, and others. Now the question was raised afresh. At the head of the Lombard school were Monti and his son-in-law Count Giulio Perticari. This caused Monti to write the Proposta di alcune correzioni ed aggiunte al Vocabolario della Crusca, in which he attacked the Tuscanism of the Crusca, but in a graceful and easy style, so as to form a prose that is one of the most beautiful in Italian literature. Perticari, whose intellect was inferior, narrowed and exacerbated the question in two treatises, Degli scrittori del Trecento and Dell'amor patrio di Dante. The dispute about language took its place beside literary and political disputes, and all Italy took part in it: Basilio Puoti at Naples, Paolo Costa in the Romagna, Marc Antonio Parenti at Modena, Salvatore Betti at Rome, Giovanni Gherardini in Lombardy, Luigi Fornaciari at Lucca, and Vincenzo Nannucci at Florence. A patriot, a classicist and a purist all at once was Pietro Giordani, born in 1774; he was almost a compendium of the literary movement of the time. His whole life was a battle for liberty. Learned in Greek and Latin authors, and in the Italian trecentisti, he left only a few writings, but they were carefully elaborated in point of style, and his prose was greatly admired in its time. Giordani closes the literary epoch of the classicists.
Minor writers
Gasparo Gozzi's satire was less elevated, but directed towards the same end as Parini's.
In his Osservatore, something like Joseph Addison's Spectator, in his Gazzetta veneta, and in the Mondo morale, by means of allegories and novelties he hit the vices with a delicate touch, introducing a practical moral. Gozzi's satire has some slight resemblance in style to Lucian's. Gozzi's prose is graceful and lively, but imitates the writers of the 14th century. Another satirical writer of the first half of the 18th century was Giuseppe Baretti of Turin. In a journal called the Frusta letteraria he mercilessly criticized the works then being published in Italy. He had learnt much by travelling; his long stay in Britain had contributed to the independent character of his mind. The Frusta was the first book of independent criticism directed particularly against the Arcadians and the pedants. In 1782 was born Giambattista Niccolini. In literature he was a classicist; in politics he was a Ghibelline, a rare exception in Guelph Florence, his birthplace. In imitating Aeschylus, as well as in writing the Discorsi sulla tragedia greca, and on the Sublime Michelangelo, Niccolini displayed his passionate devotion to ancient literature. In his tragedies he set himself free from the excessive rigidity of Alfieri, and partly approached the English and German tragic authors. He nearly always chose political subjects, striving to keep alive in his compatriots the love of liberty. Such are Nabucco, Antonio Foscarini, Giovanni da Procida, Lodovico il Moro and others. He assailed papal Rome in Arnaldo da Brescia, a long tragic piece, not suited for acting, and epic rather than dramatic. Niccolini's tragedies show a rich lyric vein rather than dramatic genius. He has the merit of having vindicated liberal ideas, and of having opened a new path to Italian tragedy. Carlo Botta, born in 1766, was a spectator of French spoliation in Italy and of the overbearing rule of Napoleon. He wrote a History of Italy from 1789 to 1814; and later continued Guicciardini's History up to 1789. He wrote after the manner of the Latin authors, trying to imitate Livy, putting together long and sonorous periods in a style that aimed at being like Boccaccio's, caring little about what constitutes the critical material of history, only intent on declaiming his academic prose for his country's benefit. Botta wanted to be classical in a style that could no longer be so, and hence he failed completely to attain his literary goal. His fame is only that of a man of a noble and patriotic heart. Not so bad as the two histories of Italy is that of the Guerra dell'indipendenza americana. Close to Botta comes Pietro Colletta, a Neapolitan born nine years after him. He also in his Storia del reame di Napoli dal 1734 al 1825 had the idea of defending the independence and liberty of Italy in a style borrowed from Tacitus; and he succeeded rather better than Botta. He has a rapid, brief, nervous style, which makes his book attractive reading. But it is said that Pietro Giordani and Gino Capponi corrected it for him. Lazzaro Papi of Lucca, author of the Commentari della rivoluzione francese dal 1789 al 1814, was not altogether unlike Botta and Colletta. He also was an historian in the classical style, and treats his subject with patriotic feeling; but as an artist he perhaps excels the other two. Alberto Fortis started the Morlachist literary movement in Italian and Venetian literature with his 1774 work Viaggio in Dalmazia ("Journey to Dalmatia"). 
The Revolution: Patriotism and classicism
The ideas behind the French Revolution of 1789 gave a special direction to Italian literature in the second half of the 18th century. Love of liberty and desire for equality created a literature aimed at national objects, seeking to improve the condition of the country by freeing it from the double yoke of political and religious despotism. The Italians who aspired to political redemption believed it inseparable from an intellectual revival, and thought that this could only be effected by a reunion with ancient classicism. This was a repetition of what had occurred in the first half of the 15th century.
Vittorio Alfieri
Patriotism and classicism were the two principles that inspired the literature that began with Vittorio Alfieri. He worshipped the Greek and Roman idea of popular liberty in arms against tyranny. He took the subjects of his tragedies from the history of these nations and made his ancient characters talk like revolutionists of his time. The Arcadian school, with its verbosity and triviality, was rejected. His aim was to be brief, concise, strong and bitter, to aim at the sublime as opposed to the lowly and pastoral. He saved literature from Arcadian vacuities, leading it towards a national end, and armed himself with patriotism and classicism.
Vincenzo Monti
Vincenzo Monti was a patriot too, but in his own way. He had no one deep feeling that ruled him, or rather the mobility of his feelings was his characteristic; but each of these was a new form of patriotism that took the place of an old one. He saw danger to his country in the French Revolution, and wrote the Pellegrino apostolico, the Bassvilliana and the Feroniade; Napoleon's victories caused him to write the Prometeo and the Musagonia; in his Fanatismo and his Superstizione he attacked the papacy; afterwards he sang the praises of the Austrians. Thus every great event made him change his mind, with a readiness that might seem incredible, but is easily explained. Monti was, above everything, an artist. Everything else in him was liable to change. Knowing little Greek, he succeeded in translating the Iliad in a way remarkable for its Homeric feeling, and in his Bassvilliana he is on a level with Dante. In him classical poetry seemed to revive in all its florid grandeur.
Ugo Foscolo
Ugo Foscolo was an eager patriot, inspired by classical models. The Lettere di Jacopo Ortis, inspired by Goethe's Werther, are a love story with a mixture of patriotism; they contain a violent protest against the Treaty of Campo Formio, and an outburst from Foscolo's own heart about an unhappy love-affair of his. His passions were sudden and violent. To one of these passions Ortis owed its origin, and it is perhaps the best and most sincere of all his writings. He is still sometimes pompous and rhetorical, but less so than, for example, in the lectures Dell'origine e dell'ufficio della letteratura. On the whole, Foscolo's prose is turgid and affected, and reflects the character of a man who always tried to pose in dramatic attitudes. This was indeed the defect of the Napoleonic epoch; there was a horror of anything common, simple, natural; everything must assume some heroic shape. In Foscolo this tendency was excessive. The Sepolcri, which is his best poem, was prompted by high feeling, and the mastery of versification shows wonderful art. There are some very obscure passages in it, where it seems that even the author had not formed a clear idea.
He left incomplete three hymns to the Graces, in which he sang of beauty as the source of courtesy, of all high qualities and of happiness. Among his prose works a high place belongs to his translation of the Sentimental Journey of Laurence Sterne, a writer by whom Foscolo was deeply affected. He went as an exile to England, and died there. He wrote for English readers some Essays on Petrarch and on the texts of the Decamerone and of Dante, which are remarkable for when they were written, and which may have initiated a new kind of literary criticism in Italy. Foscolo is still greatly admired, and not without reason. The men who made the revolution of 1848 were brought up on his work. 19th century: Romanticism and the Risorgimento The romantic school had as its organ the Conciliatore established in 1818 at Milan, on the staff of which were Silvio Pellico, Ludovico di Breme, Giovile Scalvini, Tommaso Grossi, Giovanni Berchet, Samuele Biava, and Alessandro Manzoni. All were influenced by the ideas that, especially in Germany, constituted the movement called Romanticism. In Italy the course of literary reform took another direction. Alessandro Manzoni The main instigator of the reform was Manzoni. He formulated the objects of the new school, saying that it aspired to try to discover and express il vero storico and il vero morale, not only as an end, but as the widest and eternal source of the beautiful. It is realism in art that characterizes Italian literature from Manzoni onwards. The Promessi Sposi (The Betrothed) is the work that has made him immortal. No doubt the idea of the historical novel came to him from Sir Walter Scott , but Manzoni succeeded in something more than an historical novel in the narrow meaning of that word; he created an eminently realistic work of art. The reader's attention is entirely fixed on the powerful objective creation of the characters. From the greatest to the least they have a wonderful verisimilitude. Manzoni is able to unfold a character in all particulars and to follow it through its different phases. Don Abbondio and Renzo are as perfect as Azzeccagarbugli and Il Sarto. Manzoni dives down into the innermost recesses of the human heart, and draws from it the most subtle psychological reality. In this his greatness lies, which was recognized first by his companion in genius, Goethe. As a poet too he had gleams of genius, especially in the Napoleonic ode, Il Cinque Maggio, and where he describes human affections, as in some stanzas of the Inni and in the chorus of the Adelchi. Giacomo Leopardi The great poet of the age was Giacomo Leopardi, born thirteen years after Manzoni at Recanati, of a patrician family. He became so familiar with Greek authors that he used afterwards to say that the Greek mode of thought was more clear and living to his mind than the Latin or even the Italian. Solitude, sickness, and domestic tyranny prepared him for profound melancholy. He passed into complete religious scepticism, from which he sought rest in art. Everything is terrible and grand in his poems, which are the most agonizing cry in modern literature, uttered with a solemn quietness that at once elevates and terrifies us. He was also an admirable prose writer. 
In his Operette Morali—dialogues and discourses marked by a cold and bitter smile at human destinies that freezes the reader—the clearness of style, the simplicity of language and the depth of conception are such that perhaps he is not only the greatest lyrical poet since Dante, but also one of the most perfect writers of prose that Italian literature has had.
History and politics in the 19th century
As realism in art gained ground, the positive method in criticism kept pace with it. History returned to its spirit of learned research, as is shown in such works as the Archivio storico italiano, established at Florence by Giampietro Vieusseux, the Storia d'Italia nel medio evo by Carlo Troya, a remarkable treatise by Manzoni himself, Sopra alcuni punti della storia longobardica in Italia, and the very fine history of the Vespri siciliani by Michele Amari. Alongside the great artists Leopardi and Manzoni, alongside the learned scholars, there was also in the first half of the 19th century a patriotic literature. Vieusseux had a distinct political object when in 1820 he established the monthly review Antologia. His Archivio storico italiano (1842) was, under a different form, a continuation of the Antologia, which was suppressed in 1833 owing to the action of the Russian government. Florence was in those days the asylum of all the Italian exiles, and these exiles met and shook hands in Vieusseux's rooms, where there was more literary than political talk, but where one thought and one only animated all minds, the thought of Italy. The literary movement that preceded and was contemporary with the political revolution of 1848 may be said to be represented by four writers: Giuseppe Giusti, Francesco Domenico Guerrazzi, Vincenzo Gioberti and Cesare Balbo. Giusti wrote epigrammatic satires in popular language. In incisive phrases he scourged the enemies of Italy. He was a telling political writer, but a mediocre poet. Guerrazzi had a great reputation and great influence, but his historical novels, though avidly read before 1848, were soon forgotten. Gioberti, a powerful polemical writer, had a noble heart and a great mind; his philosophical works are now as good as dead, but the Primato morale e civile degli Italiani will last as an important document of the times, and the Gesuita moderno is the most tremendous indictment of the Jesuits ever written. Balbo was an earnest student of history, and made history useful for politics. Like Gioberti in his first period, Balbo was zealous for the civil papacy, and for a federation of the Italian states presided over by it. His Sommario della storia d'Italia is an excellent epitome.
Between the 19th and 20th century
After the Risorgimento, political literature becomes less important. The first part of this period is characterized by two divergent trends of literature that both opposed Romanticism. The first trend is the Scapigliatura, which attempted to rejuvenate Italian culture through foreign influences, notably from the poetry of Charles Baudelaire and the works of the American writer Edgar Allan Poe. The second trend is represented by Giosuè Carducci, a dominant figure of this period, fiery opponent of the Romantics and restorer of the ancient metres and spirit, who, great as a poet, was scarcely less distinguished as a literary critic and historian. The influence of Émile Zola is evident in the Verismo. Luigi Capuana and, most notably, Giovanni Verga were its main exponents and the authors of a verismo manifesto.
Capuana published the novel Giacinta, generally regarded as the "manifesto" of Italian verismo. Unlike French naturalism, which was based on positivistic ideals, Verga and Capuana rejected claims of the scientific nature and social usefulness of the movement. Instead Decadentism was based mainly on the Decadent style of some artists and authors of France and England about the end of the 19th century. The main authors of the Italian version were Antonio Fogazzaro, Giovanni Pascoli, best known by his Myricae and Poemetti, and Gabriele D'Annunzio. Although differing stylistically, they championed idiosyncrasy and irrationality against scientific rationalism. Gabriele d'Annunzio produced original work in poetry, drama and fiction, of extraordinary quality. He began with some lyrics distinguished no less by their exquisite beauty of form than by their licence, and these characteristics reappeared in a long series of poems, plays and novels. Edmondo de Amicis is better known for his moral works and travels than for his fiction. Of the women novelists, Matilde Serao and Grazia Deledda became popular. Deledda was awarded the 1926 Nobel Prize in Literature for her works. Minor writers Giovanni Prati and Aleardo Aleardi continue romantic traditions. Other classical poets are Giuseppe Chiarini, Arturo Graf, Guido Mazzoni and Giovanni Marradi, of whom the two last named may perhaps be regarded as special disciples of Carducci. Enrico Panzacchi was at heart still a romantic. Olindo Guerrini (who wrote under the pseudonym of Lorenzo Stecchetti) is the chief representative of verismo in poetry, and, though his early works obtained a succès de scandale, he is the author of many lyrics of intrinsic value. Alfredo Baccelli and Mario Rapisardi are epic poets of distinction. Felice Cavallotti is the author of the stirring Marcia de Leonida. Among dialect writers, the great Roman poet Giuseppe Gioacchino Belli found numerous successors, such as Renato Fucini (Pisa) and Cesare Pascarella (Rome). Among the women poets, Ada Negri, with her socialistic Fatalità and Tempeste, achieved a great reputation; and others, such as Annie Vivanti, were highly esteemed in Italy. Among the dramatists, Pietro Cossa in tragedy, Ferdinando Martini, and Paolo Ferrari in comedy, represent the older schools. More modern methods were adopted by Giuseppe Giacosa. In fiction, the historical romance fell into disfavour, though Emilio de Marchi produced some good examples. The novel of intrigue was cultivated by Salvatore Farina. 20th century and beyond Important early-20th-century writers include Italo Svevo, the author of La coscienza di Zeno (1923), and Luigi Pirandello (winner of the 1934 Nobel Prize in Literature), who explored the shifting nature of reality in his prose fiction and such plays as Sei personaggi in cerca d'autore (Six Characters in Search of an Author, 1921). Federigo Tozzi was a great novelist, critically appreciated only in recent years, and considered one of the forerunners of existentialism in the European novel. Grazia Deledda was a Sardinian writer who focused on the life, customs, and traditions of the Sardinian people in her works. She has not gained much recognition as a feminist writer potentially due to her themes of women's pain and suffering. In 1926 she won the Nobel Prize for literature, becoming Italy's first and only woman recipient. Sibilla Aleramo (1876-1960) was born in Milan as Rina Faccio. Faccio published her first novel, Una Donna (A Woman) under her pen name in 1906. 
Today Una Donna is widely acknowledged as Italy's premier feminist novel. Her writing mixes together autobiographical and fictional elements. Maria Messina was a Sicilian writer who focused heavily on Sicilian culture, with the isolation and oppression of young Sicilian women as a dominant theme. She achieved modest recognition during her life, including receiving the Medaglia d'oro Prize for “La Mérica”. Anna Banti was born in Florence in 1895. She is best known for her short story Il Coraggio Delle Donne (The Courage of Women), which was published in 1940. Her autobiographical work, Un Grido Lacerante, was published in 1981 and won the Antonio Feltrinelli prize. As well as being a successful author, Banti is recognized as a literary, cinematic, and art critic. Elsa Morante was born in Rome in 1912. She began writing at an early age and was largely self-taught, developing a love of music and books. One of the central themes in Morante's works is narcissism. She also uses love as a metaphor in her works, saying that love can be passion and obsession and can lead to despair and destruction. She won the Premio Viareggio award in 1948. Alba de Céspedes was a Cuban-Italian writer from Rome. She was an anti-Fascist and was involved in the Italian Resistance. Her work was greatly influenced by the history and culture that developed around World War II. Although her books were bestsellers, de Céspedes has been overlooked in recent studies of Italian women writers. Poetry was represented by the Crepuscolari and the Futurists; the foremost member of the latter group was Filippo Tommaso Marinetti. Leading Modernist poets from later in the century include Salvatore Quasimodo (winner of the 1959 Nobel Prize in Literature), Giuseppe Ungaretti, Umberto Saba, who won fame for his collection of poems Il canzoniere, and Eugenio Montale (winner of the 1975 Nobel Prize in Literature). They were described by critics as "hermeticists". Neorealism was developed by Alberto Moravia (e.g. Il conformista, 1951), Primo Levi, who documented his experiences in Auschwitz in Se questo è un uomo (If This Is a Man, 1947) and other books, Cesare Pavese (e.g. The Moon and the Bonfires, 1949), Corrado Alvaro and Elio Vittorini. Dino Buzzati wrote fantastic and allegorical fiction that critics have compared to Kafka and Beckett. Italo Calvino also ventured into fantasy in the trilogy I nostri antenati (Our Ancestors, 1952–1959) and post-modernism in the novel Se una notte d'inverno un viaggiatore... (If on a Winter's Night a Traveller, 1979). Carlo Emilio Gadda was the author of the experimental Quer pasticciaccio brutto de via Merulana (1957). Pier Paolo Pasolini was a controversial poet and novelist. Giuseppe Tomasi di Lampedusa wrote only one novel, Il Gattopardo (The Leopard, 1958), but it is one of the most famous in Italian literature; it deals with the life of a Sicilian nobleman in the 19th century. Leonardo Sciascia came to public attention with his novel Il giorno della civetta (The Day of the Owl, 1961), exposing the extent of Mafia corruption in modern Sicilian society. More recently, Umberto Eco became internationally successful with the medieval detective story Il nome della rosa (The Name of the Rose, 1980). Dacia Maraini is one of the most successful contemporary Italian women writers. Her novels focus on the condition of women in Italy, and in some works she speaks to the changes women can make for themselves and for society. Aldo Busi is also one of the most important Italian contemporary writers.
His extensive production of novels, essays, travel books and manuals provides a detailed account of modern society, especially the Italian one. He's also well known as a refined translator from English, German and ancient Italian. Children's literature Italy has a long history of children's literature. In 1634, the Pentamerone from Italy became the first major published collection of European folk tales. The Pentamerone contained the first literary European version of the story of Cinderella. The author, Giambattista Basile, created collections of fairy tales that include the oldest recorded forms of many well-known European fairy tales. In the 1550s, Giovanni Francesco Straparola released The Facetious Nights of Straparola. Called the first European storybook to contain fairy-tales, it eventually had 75 separate stories, albeit intended for an adult audience. Giulio Cesare Croce also borrowed from stories children enjoyed for his books. In 1883, Carlo Collodi wrote The Adventures of Pinocchio, which was also the first Italian fantasy novel. In the same year, Emilio Salgari, the man who would become "the adventure writer par excellence for the young in Italy" published for the first time his famous Sandokan. In the 20th century, Italian children's literature was represented by such writers as Gianni Rodari, author of Il romanzo di Cipollino, and Nicoletta Costa, creator of Julian Rabbit and Olga the Cloud. Women writers Italian women writers have always been underrepresented in academia. In many collections of prominent and influential Italian literature, women's works are not included. "A woman writer," Anna Banti once said, "even if successful, is marginalized. They will say that she is great among women writers, but they will not equate her to male writers." There has been an increase in the inclusion of women in academic scholarship in recent years, but representation is still inequitable. Italian women writers were first acknowledged by critics in the 1960s, and numerous feminist journals began in the 1970s, which increased readers' accessibility to and awareness of their work. The work of Italian women writers is both progressive and penetrating; through their explorations of the feminine psyche, their critiques of women's social and economic position in Italy, and their depiction of the persistent struggle to achieve equality in a "man's world," they have shattered traditional representations of women in literature. The page played an important role in the rise of Italian feminism, as it provided women with a space to express their opinions freely, and to portray their lives accurately. Reading and writing fiction became the easiest way for women to explore and determine their place in society. Italian war novels, such as Alba de Céspedes's Dalla parte di lei (The Best of Husbands, 1949), trace women's awakenings to political realities of the time. Subsequent psychological and social novels of Italian women writers examine the difficult process of growing up for women in Italian society and the other challenges they face, including achieving a socially satisfactory life and using intellectual aspirations to gain equality in society. Examples include Maria Messina's La casa nel vicolo (A House in the Shadows, 1989) and Laura Di Falco's Paura di giorno (Fear of the Day, 1954). After the public condemnation of women's abuse in Italian literature in the 1970s, women writers began expressing their thoughts about sexual difference in novels. 
Many Italian novels focus on facets of Italian identity, and women writers have always been leaders in this genre. References Bibliography Further reading Important German works, besides Adolf Gaspary, are those of Berthold Wiese and Erasmo Percopo (illustrated; Leipzig, 1899), and of Tommaso Casini (in Gröber's Grundr. der rom. Phil., Strasbourg, 1896–1899). English students are referred to John Addington Symonds's Renaissance in Italy (especially, but not exclusively, vols. iv. and v.; new ed., London, 1902), and to Richard Garnett's History of Italian Literature (London, 1898). A Short History of Italian Literature, by J. H. Whitfield (1969, Pelican Books) Original texts and criticism De Sanctis, F., Storia della letteratura italiana. Napoli, Morano, 1870 Gardner, E. G., The National Idea in Italian Literature, Manchester, 1921 Momigliano, A., Storia della letteratura italiana. Messina-Milano, Principato, 1936 Sapegno, N., Compendio di storia della letteratura italiana. La Nuova Italia, 1936–47 Croce, B., La letteratura italiana per saggi storicamente disposti. Laterza, 1956–60 Russo, L., Compendio storico della letteratura italiana. Messina-Firenze, D'Anna, 1961 Petronio, G., Compendio di storia della letteratura italiana. Palermo, Palumbo, 1968 Asor Rosa, A., Sintesi di storia della letteratura italiana. Firenze, La Nuova Italia, 1986 AA.VV., Antologia della poesia italiana, ed. C. Segre and C. Ossola. Torino, Einaudi, 1997 De Rienzo, Giorgio, Breve storia della letteratura italiana. Milano, Tascabili Bompiani, 2006 [1997] Giudice, A., Bruni, G., Problemi e scrittori della letteratura italiana. Torino, 1973 Bruni, F., Testi e documenti. Torino, UTET, 1984 Bruni, F., L'Italiano nelle regioni. Torino, UTET, 1997 Ferroni, G., Storia della letteratura italiana. Milano, Mondadori, 2006 External links Liber Liber (progetto Manuzio): Italian literature texts. www.StoriaDellaLetteratura.it - Storia della letteratura italiana (history of Italian literature, full text, by Antonio Piromalli) Original and unabridged Italian versions of Italian literature Original Italian literature academic journal articles
6115
https://en.wikipedia.org/wiki/P%20versus%20NP%20problem
P versus NP problem
The P versus NP problem is a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified can also be solved quickly. The informal term quickly, used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function on the size of the input to the algorithm (as opposed to, say, exponential time). The general class of questions for which some algorithm can provide an answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be verified in polynomial time is NP, which stands for "nondeterministic polynomial time". An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If it turns out that P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem is considered by many to be the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. Example Consider Sudoku, a game where the player is given a partially filled-in grid of numbers and attempts to complete the grid following certain rules. Given an incomplete Sudoku grid, of any size, is there at least one legal solution? Any proposed solution is easily verified, and the time to check a solution grows slowly (polynomially) as the grid gets bigger. However, all known algorithms for finding solutions take, for difficult examples, time that grows exponentially as the grid gets bigger. So, Sudoku is in NP (quickly checkable) but does not seem to be in P (quickly solvable). Thousands of other problems seem similar, in that they are fast to check but slow to solve. Researchers have shown that many of the problems in NP have the extra property that a fast solution to any one of them could be used to build a quick solution to any other problem in NP, a property called NP-completeness. Decades of searching have not yielded a fast solution to any of these problems, so most scientists suspect that none of these problems can be solved quickly. This, however, has never been proven. History The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973). Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, where he speculated that cracking a sufficiently complex code would require time exponential in the length of the key. 
If proved (and Nash was suitably skeptical) this would imply what is now called P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated. Context The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem). In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other). In this theory, the class P consists of all those decision problems (defined below) that can be solved on a deterministic sequential machine in an amount of time that is polynomial in the size of the input; the class NP consists of all those decision problems whose positive solutions can be verified in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes: Is P equal to NP? Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believe P ≠ NP. These polls do not imply anything about whether P=NP is true, as stated by Gasarch himself: ″This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era.″ NP-completeness To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are a set of problems to each of which any other NP-problem can be reduced in polynomial time and whose solution may still be verified in polynomial time. That is, any NP problem can be transformed into any of the NP-complete problems. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP. NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time. For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into an instance of the Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many such NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems have been shown to be NP-complete, and no fast algorithm for any of them is known. 
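The idea that an NP solution "may still be verified in polynomial time" is easy to make concrete. The short Python sketch below is only an illustration, and the clause encoding is an assumption chosen here for convenience rather than anything fixed by the problem: it checks a proposed truth assignment against a Boolean formula in conjunctive normal form in time linear in the size of the formula, even though no general polynomial-time method is known for finding such an assignment.
def check_sat_certificate(clauses, assignment):
    # clauses: list of clauses, each a list of nonzero integers;
    # literal k stands for variable k, literal -k for NOT variable k
    # (a DIMACS-style convention assumed here for convenience).
    # assignment: dict mapping variable number -> True/False.
    # Runs in time linear in the total number of literals.
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is falsified by the assignment
    return True

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
formula = [[1, -2], [2, 3], [-1, -3]]
print(check_sat_certificate(formula, {1: True, 2: True, 3: False}))  # True
By contrast, the only obvious way to decide satisfiability is to try assignments, and there are 2^n of them for n variables; whether this exponential search can always be avoided is, because SAT is NP-complete, exactly the P versus NP question.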
Based on the definition alone it is not obvious that NP-complete problems exist; however, a trivial and contrived NP-complete problem can be formulated as follows: given a description of a Turing machine M guaranteed to halt in polynomial time, does there exist a polynomial-size input that M will accept? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists. The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial-time solution of satisfiability, which in turn can be used to solve any other NP problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem". Harder problems Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games. The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 2^(2^(cn)) for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. 
They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all. It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?" Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of the solutions immediately tells whether at least one solution exists: the answer is yes exactly when the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial-time solution to any of them would allow a polynomial-time solution to all other #P problems. Problems in NP not known to be in P or NP-complete In 1975, Richard E. Ladner showed that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding, given an integer and a bound k, whether the integer has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time of roughly exp(((64/9)^(1/3) + o(1)) (n ln 2)^(1/3) (ln(n ln 2))^(2/3)) to factor an n-bit integer. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes. Does P mean "easy"? All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common and reasonably accurate assumption in complexity theory; however, it has some caveats. 
First, it is not always true in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, thus rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n^2), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) in Knuth's up-arrow notation, where h is the number of vertices in H. On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to tackling the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms. Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms. Reasons to believe P ≠ NP or P = NP Cook restates the problem in The P Versus NP Problem, his official problem description for the Clay Mathematics Institute, simply as: Does P = NP? According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH. It is also intuitively argued that the existence of problems that are hard to solve but for which the solutions are easy to verify matches real-world experience. On the other hand, some researchers believe that there is overconfidence in believing P ≠ NP and that researchers should explore proofs of P = NP as well. For example, in 2002 these statements were made: Consequences of solution One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well. P = NP A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields. It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. 
Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial-time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them. An example of a field that could be upended by a solution showing P = NP is cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including: Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet. Symmetric ciphers such as AES or 3DES, used for the encryption of communications data. Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, the problem of finding a pre-image that hashes to a given value must be difficult in order to be useful, and ideally should require exponential time. However, if P = NP, then finding a pre-image M can be done in polynomial time, through reduction to SAT. These would need to be modified or replaced by information-theoretically secure solutions not inherently based on P-NP inequivalence. On the other hand, there are enormous positive consequences that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as some types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; if these problems were efficiently solvable, it could spur considerable advances in life sciences and biotechnology. But such changes may pale in significance compared to the revolution an efficient method for solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics: Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method that is guaranteed to find proofs to theorems, should one exist of a "reasonable" size, would essentially end this struggle. Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof: P ≠ NP A proof that showed that P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would nevertheless represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place. 
Even a proof that P ≠ NP would still leave open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds. Results about difficulty of proof Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are not powerful enough to answer the question, thus suggesting that novel technical approaches are required. As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, each of which is known to be insufficient to prove that P ≠ NP: relativizing proofs (ruled out by the oracle results of Baker, Gill and Solovay), natural proofs (ruled out by Razborov and Rudich under standard cryptographic assumptions), and algebrizing proofs (ruled out by Aaronson and Wigderson). These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results. These barriers have also led some computer scientists to suggest that the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). The interpretation of an independence result could be that either no polynomial-time algorithm exists for any NP-complete problem, and such a proof cannot be constructed in (e.g.) ZFC, or that polynomial-time algorithms for NP-complete problems may exist, but it is impossible to prove in ZFC that such algorithms are correct. However, if it can be shown, using techniques of the sort that are currently known to be applicable, that the problem cannot be decided even with much weaker assumptions extending the Peano axioms (PA) for integer arithmetic, then there would necessarily exist nearly-polynomial-time algorithms for every problem in NP. Therefore, if one believes (as most complexity theorists do) that not all problems in NP have efficient algorithms, it would follow that proofs of independence using those techniques cannot be possible. Additionally, this result implies that proving independence from PA or ZFC using currently known techniques is no easier than proving the existence of efficient algorithms for all problems in NP. Claimed solutions While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger maintains a list that, as of 2016, contains 62 purported proofs of P = NP, 50 proofs of P ≠ NP, 2 proofs the problem is unprovable, and one proof that it is undecidable. 
Some attempts at resolving P versus NP have received brief media attention, though these attempts have since been refuted. Logical characterizations The P = NP problem can be restated in terms of the expressibility of certain classes of logical statements, as a result of work in descriptive complexity. Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P can be expressed in first-order logic with the addition of a suitable least fixed-point combinator. Effectively, this, in combination with the order, allows the definition of recursive functions. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P. Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH). Polynomial-time algorithms No algorithm for any NP-complete problem is known to run in polynomial time. However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). However, these algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin, is such an example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// This is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
  FOR M = 1...K
    Run program number M for K steps with input S
    IF the program outputs a list of distinct integers
      AND the integers are all in S
      AND the integers sum to 0
    THEN
      OUTPUT "yes" and HALT
If, and only if, P = NP, then this is a polynomial-time algorithm accepting an NP-complete language. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm). This algorithm is enormously impractical, even if P = NP. 
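The correctness of Levin's construction rests on the fact that the acceptance test inside the inner loop is itself an ordinary polynomial-time check on the simulated program's output. A minimal Python sketch of that check is given below; it is only an illustration, and the data representation (a Python list for the candidate output, a set for S) is an assumption made here for convenience rather than anything fixed by the algorithm.
def is_subset_sum_certificate(candidate, s):
    # candidate: the list of integers output by the simulated program
    # s: the input set of integers
    # Accept only a nonempty list of distinct integers, all drawn from s,
    # that sums to 0. (Requiring nonemptiness reflects the usual convention
    # that the empty subset does not count as a solution.)
    # Each condition is checkable in time polynomial in the input size.
    return (
        len(candidate) > 0
        and len(candidate) == len(set(candidate))  # distinct integers
        and set(candidate) <= set(s)               # all taken from S
        and sum(candidate) == 0                    # summing to zero
    )

print(is_subset_sum_certificate([3, -1, -2], {3, -1, -2, 7}))  # True
print(is_subset_sum_certificate([3, 4], {3, -1, -2, 7}))       # False
Because this check runs in polynomial time, the enumeration never accepts a "no" instance; the open question is only how quickly it reaches an accepting program on a "yes" instance.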
If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the enumeration algorithm above will try at least 2^(b-1) - 1 other programs first. Formal definitions P and NP Conceptually speaking, a decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that can produce the correct answer for any input string of length n in at most cn^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is defined as the set of all languages that can be decided by a deterministic polynomial-time Turing machine. That is, P = { L : L = L(M) for some deterministic polynomial-time Turing machine M }, where L(M) = { w ∈ Σ* : M accepts w }, and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies the following two conditions: M halts on all inputs w; and there exists k ∈ N such that T_M(n) ∈ O(n^k), where O refers to the big O notation, T_M(n) = max{ t_M(w) : w ∈ Σ*, |w| = n }, and t_M(w) is the number of steps M takes to halt on input w. NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach to define NP is to use the concept of certificate and verifier. Formally, NP is defined as the set of languages over a finite alphabet that have a verifier that runs in polynomial time, where the notion of "verifier" is defined as follows. Let L be a language over a finite alphabet, Σ. L ∈ NP if, and only if, there exist a binary relation R ⊆ Σ* × Σ* and a positive integer k such that the following two conditions are satisfied: For all x ∈ Σ*, x ∈ L if and only if there exists y ∈ Σ* such that (x, y) ∈ R and |y| ∈ O(|x|^k); and the language L_R = { x#y : (x, y) ∈ R } over Σ ∪ {#} is decidable by a deterministic Turing machine in polynomial time. A Turing machine that decides L_R is called a verifier for L and a y such that (x, y) ∈ R is called a certificate of membership of x in L. In general, a verifier does not have to be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time. Example Let COMPOSITE = { x ∈ N : x = pq for some integers p, q > 1 }. Clearly, the question of whether a given x is a composite is equivalent to the question of whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations). COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test. NP-completeness There are many equivalent ways of describing NP-completeness. Let L be a language over a finite alphabet Σ. L is NP-complete if, and only if, the following two conditions are satisfied: L ∈ NP; and any L′ in NP is polynomial-time-reducible to L (written as L′ ≤p L), where L′ ≤p L holds if, and only if, the following two conditions are satisfied: There exists f : Σ* → Σ* such that for all w in Σ* we have: w ∈ L′ if and only if f(w) ∈ L; and there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w. Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete. Popular culture The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem. In the sixth episode of The Simpsons' seventh season, "Treehouse of Horror VI", the equation P=NP is seen shortly after Homer accidentally stumbles into the "third dimension". 
In the second episode of season 2 of Elementary, "Solve for X" revolves around Sherlock and Watson investigating the murders of mathematicians who were attempting to solve P versus NP. See also Game complexity List of unsolved problems in mathematics Unique games conjecture Unsolved problems in computer science Notes References Sources Further reading Online drafts External links Aviad Rubinstein's Hardness of Approximation Between P and NP, winner of the ACM's 2017 Doctoral Dissertation Award. 1956 in computing Computer-related introductions in 1956 Conjectures Mathematical optimization Millennium Prize Problems Structural complexity theory Unsolved problems in computer science Unsolved problems in mathematics
9531465
https://en.wikipedia.org/wiki/GoToMeeting
GoToMeeting
GoToMeeting (stylised as GoTo) is a web-hosted service created and marketed by LogMeIn. It is an online meeting, desktop sharing, and video conferencing software package that enables the user to meet with other computer users, customers, clients or colleagues via the Internet in real time. In late 2015, Citrix announced plans to spin off the GoToMeeting business as a standalone subsidiary with a market value around $4 billion. In July 2016, Citrix and LogMeIn announced plans to merge Citrix's GoTo family of products with LogMeIn. Technology GoToMeeting is designed to broadcast the desktop view of a host computer to a group of computers connected to the host through the Internet. Transmissions are protected with high-security encryption and optional passwords. By combining a web-hosted subscription service with software installed on the host computer, transmissions can be passed through highly restrictive firewalls. History GoToMeeting was developed in July 2004 using the remote access and screen sharing technology from GoToMyPC and GoToAssist to allow web conferencing. The later release of GoToWebinar in 2006 and GoToTraining in 2010 expanded GoToMeeting capabilities to accommodate larger audiences. In February 2017, GoToMeeting became a product of LogMeIn as a result of a merger between LogMeIn and Citrix's GoTo business. Editions and features GoToMeeting features include: Mobile apps for iPad, iPhone and Android devices Encryption and authentication security provided by a Secure Sockets Layer (SSL) Web site with end-to-end 128-bit Advanced Encryption Standard (AES) encryption and optional passwords Specific application sharing for showing only selected programs with attendees Multi-monitor support for a client PC Meeting recording and playback for recording and saving meetings to a user desktop for later review A total audio package providing toll-based phone conferencing or conferencing via VoIP. GoToMeeting hosts up to 250 attendees, with the Enterprise tier allowing for up to 3,000. Video conferencing HIPAA compliance In-room solutions with GoToRoom and InRoom Link Awards 2016 Best Productivity App 2016, Appy Awards 2016 and 2017 Best Collaboration Solution, Codie awards A leader in Gartner's Magic Quadrant Competition GoToMeeting competes in a marketplace for web and video conferencing, where businesses and professionals can meet virtually. See also Comparison of web conferencing software Collaborative software References External links GoToMeeting Pricing GoToMeeting Blog Web conferencing Communication software
10748488
https://en.wikipedia.org/wiki/Bulk%20email%20software
Bulk email software
Bulk email software is software that is used to send emails in large quantities. The term usually refers to standalone software, though there are also web-based services for sending bulk email. Computer worms that spread themselves via email are an example of bulk email software, sometimes referred to as a mass mailer. Such worms usually (but not necessarily) send spoofed "From" headers. Types of software Most bulk email software programs are hosted by third-party companies that sell access to their system. Customers pay per send or at a fixed monthly rate to have their own user account, from which they can manage their contacts and send out email campaigns. Generally, the advantage of this type of program is the reliability of the third-party vendor and their application. Some bulk email software programs are self-hosted: the customer buys a license or develops their own program and then hosts it. Generally, the advantage of this type of program is the lack of ongoing monthly fees to the owner/developer of the program. The disadvantage of this option is that the delivery rate is often reduced, because users typically send bulk email from a single server, and various settings must be tuned to avoid that server being labeled as a spam source. References Email
39779502
https://en.wikipedia.org/wiki/JoAnne%20Yates
JoAnne Yates
JoAnne Yates (born 1951) is an American organizational theorist and Professor of Management at the MIT Sloan School of Management, known for her study of communication and information systems in organizations. Biography Yates received her BA from Texas Christian University in 1974, and her M.A. and later her Ph.D. in 1980 from the University of North Carolina. In 1980 she started her academic career at the Massachusetts Institute of Technology, Cambridge, MA, where she began working with Professor Wanda Orlikowski. In the summers of 1989, 1990, 1992 and 1995 she was a visiting professor at the Helsinki School of Economics. Since 1999 she has been Sloan Distinguished Professor of Management at the MIT Sloan School of Management. From 2007 to 2012 she was Deputy Dean of the Sloan School of Management. Yates' research focuses on the use of communication and information technology in business. Her aim is to understand "how the use of communication and information technology within firms shapes and is shaped over time by its changing organizational, managerial, and technological contexts." Publications Yates has authored and co-authored numerous publications in her field of expertise. Selected books 1984. Graphs as a managerial tool : a case study of Du Pont's use of graphs, 1904-1949 1984. Internal communication systems in American business structures : a framework to aid appraisal 1985. Structural effects of communication technologies on firms : lessons from the past 1993. Control Through Communication: The Rise of System in American Management. Johns Hopkins University Press 1998. Proceedings of the International Working Conference on Information and Process Integration in Enterprises. With Toshiro Wakayama, Srikanth Kannapan, Chan Meng Khoong and Sham Navathe, eds. Kluwer Press 2001. IT and Organizational Transformation: History, Rhetoric, and Practice. With John Van Maanen, eds., Sage Publications. 2005. Structuring the Information Age: Life Insurance and Information Technology in the 20th Century. Johns Hopkins University Press. Selected articles Malone, Thomas W., JoAnne Yates, and Robert I. Benjamin. "Electronic markets and electronic hierarchies." Communications of the ACM 30.6 (1987): 484-497. Yates, JoAnne, and Wanda J. Orlikowski. "Genres of organizational communication: A structurational approach to studying communication and media." Academy of Management Review 17.2 (1992): 299-326. Orlikowski, Wanda J., and JoAnne Yates. "Genre repertoire: The structuring of communicative practices in organizations." Administrative Science Quarterly (1994): 541-574. Orlikowski, W. J., Yates, J., Okamura, K., & Fujimoto, M. "Shaping electronic communication: the metastructuring of technology in the context of use." Organization Science 6.4 (1995): 423-444. References External links JoAnne Yates, Web site at MIT 1951 births Living people American business theorists American women computer scientists American computer scientists Texas Christian University alumni University of North Carolina alumni MIT Sloan School of Management faculty American women academics 21st-century American women
27138619
https://en.wikipedia.org/wiki/Hacker-Craft
Hacker-Craft
Hacker-Craft is the brand name for classic mahogany boats built by The Hacker Boat Co. It is an American company, founded in Detroit, Michigan in 1908 by John Ludwig Hacker (1877–1961, known as John L. Hacker or just "John L.") and is the oldest continuous builder of wooden motorboats in the world. The company moved operations to New York State in the 1970s and continues to produce hand-built boats at the corporate headquarters in Queensbury, NY, near Lake George, with storage facilities and a marina lakeside in Silver Bay, New York. Hacker was a naval architect and American motorboat designer. His major design and engineering accomplishments include the invention of the "V"-hull design and the floats that allowed the Wright brothers' biplane to operate from water. The company was known for its runabouts, utilities, commuters, and yacht tenders. John L. Hacker, the early years Hacker was born in Detroit, Michigan on May 24, 1877. For four years, while working at his father's business as a bookkeeper, he attended night school and took a correspondence course in order to become an accredited marine designer. Once qualified (at the age of 22) he set about solving a number of problems that inhibited speed and performance in motor boats. Pleasure boats of the early 1900s were narrow, round-bottomed launches that plowed through the water instead of planing over it as boats do nowadays. Hacker's first major task in boat design was to try to solve the problem of "squatting", which occurred with all the canoe-stern shaped powerboats of the 1900s. His theory was that if his boats were going to go fast, they would have to "plane" rather than plow through the water, but the tendency to plane was considered an unsafe mode that was to be avoided. Nonetheless, he built a test craft to prove his new theories—a runabout. The boat's propeller and rudder were mounted under the transom and a strut was used to position the propeller shaft. The boat also featured Hacker's revolutionary "V"-hull design, which produced stunning speed and efficiency at low horsepower. In 1904, he designed Au Revoir, the fastest boat in the world at the time, and on August 12, 1908, building on this success, he founded the Hacker Boat Company in Detroit, upon purchasing and renaming an existing firm, the Detroit Launch and Power Company. Coincidentally, this was the same day on which the first ever Model T Ford automobile was produced by his friend, Henry Ford. Hacker's designs led to many advances that today's boat owners take for granted. His combination of design flair and engineering brilliance led him to create the shape and style that was to become the signature look of American speedboats. The names of his designs include Pardon Me, the Minute Man, Thunderbird, El Lagarto, Bootlegger, Peerless, Dolphin, Kitty Hawk, Tempo VI, the Belle Isle Bear Cats, Lockpat II, and My Sweetie. The birth of speed In 1911, Hacker designed and installed two floats on the Wright Brothers' biplane so that it could take off and land on water. This was the first use of twin floats on an aircraft. In the same year he designed Kitty Hawk, the first successful step hydroplane, which reached a then-unthinkable speed and was at the time the fastest boat in the world. There followed a succession of Kitty Hawks, each building on the success of its predecessor and in the process breaking four sea-speed records. In 1914, Hacker moved to Detroit and the Hacker Boat Company opened at 323 Crane Avenue. 
His runabout designs for Gregory's Belle Isle Boat & Engine Company were soon to bring success to the firm. The boats, called Belle Isle Bear Cats, proved popular with prominent owners like J.W. Packard and Henry Ford. The company was thriving and in 1921, Hacker decided it was time to open a satellite facility in Mount Clemens, Michigan. Two years later, he moved the entire boat building operation from Detroit to Mount Clemens. Hacker Boat Company flourishes The boat works on the Clinton River in Mount Clemens continued to expand, and by 1928 they provided floor space for the handcrafting of fine mahogany runabouts. That year the influential "Pageant of Progress" reported that the Hacker Boat Company employed sixty-eight men with demand growing rapidly. Sales for that year were $450,000 (about $7,398,631 in 2022 USD). Along with a handful of others, such as Gar Wood and Chris-Craft, Hacker's gleaming mahogany runabouts captured the public imagination with their elegant design and breakneck speed and in the process quickly became the must-have plaything for the rich and famous. In 1930 the King of Siam ordered a custom-built Landau-top runabout powered with a Packard engine. Only four authorized dealers offered Hacker boats to the public during this period. The company did most of its business through factory direct orders from the customer, and excelled in custom-built craft. 1930–1959 The Great Depression had a devastating effect on the pleasure-boat market. In 1935 Hacker took on a business partner, John Mcready, who quickly assumed day-to-day control of the company. Meanwhile, Hacker focused on designing boats in what was to become his golden period: he was responsible for a remarkable number of racing winners including El Lagarto (which won the Gold Cup in 1933, 1934 and 1935), Scotty, Scotty Too, My Sweetie and Miss Pepsi, all record breakers. In all, Hacker was responsible for over twenty water speed records, five Gold Cup winners, four President's Race winners and numerous other speed trials and racing victories. In Mount Clemens, Hacker Boat Company rebounded from the Depression with popular "utility" Hacker-Craft runabouts priced for the ordinary consumer. In 1935, the utility could be purchased for $975 ($20,008 in 2022 USD). In 1939 Hacker was commissioned by property tycoon George Whittell to build what was to be one of his masterpieces and is now a national historic treasure, a commuter called Thunderbird, which was commemorated on a postage stamp in 2007 by the U.S. Postal Service. In 1952, Hacker Boat was awarded a government contract for the construction of 25 ocean-going picket boats for the U.S. Navy and 112 crash boats, sedan utility boats, and target boats. Hacker's designs included patrol boats, air-sea rescue boats, and cruisers. Hacker Boat Company revitalized In 1959 William Morgan of Morgan Marine, a boat-builder from Silver Bay, on Lake George, New York, acquired the Hacker-Craft name with the aim of revitalizing the company's historic legacy. Morgan Marine made significant structural and engineering modifications. The use of new technologies allowed for improvements of the boats in a few subtle but important ways. Where Hacker had used less powerful engines, Morgan Marine was able to power these hulls with Crusader engines that could exceed 50 miles per hour. To accommodate these improved engines, Morgan Marine had to make the supports stronger than those used in the original designs. 
Morgan's most notable accomplishment, however, was improving the handling of the boats. If the old Hackers were run at top speeds, the bow could rise and block the driver's view. By reworking the bottom design, Morgan ensured that the boat would plane better/remain more level in the water. 1959-present Additional improvements were necessary to modernize the boats while retaining their classic look. Many of the original jigs had been lost, and, where necessary, they made new sand castings for the hardware. While retaining most of the traditional craftsmanship, Morgan made several significant improvements over the old plans, as concessions to the advancement of technology: dual exhausts, rather than the original single exhaust, for enhanced engine performance; the use of stainless steel fittings and hardware throughout so that pitting was no longer a problem; state-of-the-art epoxy encapsulation and bonding techniques; triple planked bottoms completely encased in epoxy; double planked sides and deck saturated in epoxy; 25% more frames; double the number of floor timbers; up to 18 coats of varnish; the use of renewable-resourced Honduras mahogany; new improved steering for more maneuverability; and laminated windshields with either blue or green tints. The product line consisted of three models: 26 foot Gentleman's Racer; 26 foot and 30 foot triple-cockpit Runabout. Each could be customized to the buyer’s wishes. Approximately 15 of the modern classics were built in 1983. The 3 models produced were a 26 foot gentleman's roadster, a 26 foot three-cockpit runabout, and a 30 foot three-cockpit runabout. In 2004 Robert Wagemann bought the company from William Morgan. In 2008 George Badcock's Erin Investments acquired a majority interest in the company from Robert Wagemann. Subsequently, the company opened a 32,000-square-foot production facility in Ticonderoga, moving production there from Silver Bay in order to increase capacity. In 2011 Erin Investments attained complete ownership of the company. In addition to the new production facility, the company's headquarters campus on Lake George, also the site of a full service marina, was upgraded, with a new showroom capable of displaying four boats. The headquarters offices and all storage and service buildings were repaired/refurbished. The campus was landscaped and parking areas upgraded. In 2011 Hacker-Craft were seen in TV and catalog shoots for Marks & Spencer (UK), Tommy Hilfiger and Nautica. In 2011 the custom designed Neiman Marcus Edition Hacker-Craft was selected as a “fantasy gift” in the legendary Neiman Marcus Christmas Book catalog. In 2012 a partnership with Tommy Bahama resulted in a custom designed model, the Tommy Bahama Edition Hacker-Craft (TBE). In 2015 the company introduced a limousine yacht tender, and appointed the Sierra Boat Company, on Lake Tahoe, Nevada, and Wawasee Boat Company, of Syracuse, Indiana, as an authorized representative. In 2017 the company appointed Thailand-based Classic Boats Lifestyle, Ltd. (CBL) as an official stocking dealer. In 2019, a Hacker Craft boat appeared in a season two episode of Netflix's Ozark. In 2021 the company appointed Maryland-based Tomes Landing Marina as an official stocking dealer. In 2021 the company appointed Wisconsin-based Pandora Boat Company as an official stocking dealer. 
In 2021 the company relocated its corporate offices, production and restoration operations from Ticonderoga to Queensbury, NY, to better attract new employees and be more convenient to prospective customers to view and discuss purchasing a Hacker-Craft. The new space was 40% larger than the former factory, to better enable the building of custom boats and the rapidly growing restoration business. Plans include building a showroom on the site. In 2021, in conjunction with the move to the new location, the company introduced a new line of models, the Evolution Collection. The line includes the 37’ 5” Commuter, the 30’-35’ Center Console and the 40’ Monaco; each customizable. The Evolution Collection transformed Hacker from producing solely traditional designs to also building models with an entirely new look for the brand. The line’s three unique models defined a new era for Hacker and helped the company enter new markets and attract buyers looking for innovative designs and cutting-edge technology, yet with the panache of the Hacker-Craft brand. References Further reading External links The Mariners' Museum, John L. Hacker Collection—the largest single archive of material relating to John Hacker and Hacker-Craft Motorboats American shipbuilders Manufacturing companies established in 1908 Privately held companies based in New York (state) 1908 establishments in Michigan
1157278
https://en.wikipedia.org/wiki/Battlefield%20Line%20Railway
Battlefield Line Railway
The Battlefield Line Railway is a heritage railway in Leicestershire, England. It runs from Shackerstone (Grid ref ) to Shenton (), via Market Bosworth, a total of . Shenton is near Bosworth Field, (the location of the final battle of the Wars of the Roses immortalised in Shakespeare's Richard III), giving the railway its name. Overview The railway runs steam and diesel-hauled trains every weekend and Bank Holiday from March to December, as well as a summer mid-week service on Tuesday, Wednesdays, Thursdays in July and August and Wednesdays in September; the latter is operated by the Heritage diesel railcar service. Special events:Christmas Santa Specials and others throughout the year. History The railway used to be part of the London and North Western Railway and the Midland Railway, who operated the line jointly between Moira West Junction and Nuneaton. The first trains ran along this section in 1873. At Shackerstone station, there was once a junction where one section branched off towards Moira and Ashby and the other went towards Coalville Junction. In 1883, the Charnwood Forest Railway was opened, which extended the branch from Coalville Junction to Loughborough's Derby Road station, passing through the villages of Whitwick and Shepshed. In the 1923 Grouping, these lines were assigned to the London Midland and Scottish Railway. In 1931 the last scheduled passenger train went down the Charnwood Forest branch, with the line then only being open to freight and excursions until the 1960s. The Coalville Junction – Shackerstone section was dismantled and closed completely in 1964. The Ashby – Nuneaton line had its last passenger service in 1965, which was an enthusiasts' special, before British Rail pulled the rails up in 1970. In its heyday, Shackerstone was a busy station, with steam trains doing the workings between Ashby and Nuneaton, whilst a railcar did the service between Shackerstone and Loughborough Derby Road. The line was originally double track but was later singled. Confusingly, part of the line was called the Bluebell Line (the Charnwood Forest Line, Hugglescote to Loughborough Derby Road station; this line was only accessible via the ANJR). The royal train now in the National Railway Museum went to Shackerstone on its first outing in December 1902. It conveyed King Edward VII, Queen Alexandra and Princess Victoria on their way to Gopsall Hall, where Handel is reputed to have composed his oratorio Messiah. Renovation project The Shackerstone Railway Society was set up in 1969 at Market Bosworth, but soon moved to Shackerstone in 1970, as they needed a proper home for their first steam engine. When they got to Shackerstone they found one through line still intact, and their first aim was to build some sidings. Later they reinstated the "down" platform and connected the sidings to the line to Market Bosworth. In 1973, to celebrate the centenary of the line, a small train of open wagons was hauled to Market Bosworth. Following the successful conclusions of the negotiations with British Rail, a start was made on track rearrangements which created run-round loops at both ends of the line and a number of sidings at Shackerstone. In the 1980s, the Battlefield Line launched a campaign to extend their line to Shenton. This involved buying of track and in 1992 after a successful campaign, the inaugural service arrived, hauled by the appropriately named 0-6-0 tank engine "Richard III." 
Journey The first section of the journey travelling south from is a climbing gradient which continues until the train is clear of the station limits. The signal box on the left is the oldest Midland Railway Co. type one box still in operational use. The train then passes under the first bridge which carries the road to Barton-in-the-Beans, and into open farmland. is from Shackerstone. There are usually stored locomotives or wagons here. To the right can be seen the old buildings and signal box which used to control part of the operation of the station. South of Market Bosworth station, the train passes Aqueduct Cottage and the Ashby Canal aqueduct beyond it. Trains slow as they cross the road bridge between Shenton and Sutton Cheney. As the line curves to the right, the train approaches the terminus at , just over away from Shackerstone. The station pottery is the only surviving part of the original station. The present station is the reconstructed Humberstone Road station from Leicester. At the end of the line is a headshunt underneath an old cattle bridge. The small bridge was previously used to allow safe passage of farm traffic over the original railway. Steam locomotives Visiting locos Diesel shunters Ex. mainline heritage diesels Diesel multiple units Electric locomotives Coach stock British Railways Mark 1 coaches The original A&NJR closed long before the formation of British Railways, but as very few suitable period carriages were preserved, BR Mark 1 coaches form part of all Battlefield Line passenger trains today. They are a renowned design of standardised rail stock, being both durable and high-capacity vehicles. References External links Battlefield Line Official Web Heritage railways in Leicestershire Railway lines opened in 1873
69223559
https://en.wikipedia.org/wiki/Crab%20Game
Crab Game
Crab Game is a free-to-play video game developed and published by Norwegian indie developer Daniel Sooman, commonly known as Dani. The game was released for Microsoft Windows on Steam and for macOS and Linux on Itch.io on 29 October 2021; the macOS and Linux editions were later released on Steam on 16 November. Based on the Netflix series Squid Game, players compete with each other in a wide range of minigames in order to be the last one alive. Gameplay Crab Game is a first-person multiplayer party game where players compete in various minigames based around childhood games. The player must avoid dying and be the last one remaining in order to win a cash prize; however, the game ends if there is nobody left. Players can attack others with various items, compete in various maps and game modes and communicate with each other through proximity chat. Players can also create servers with up to 50 players or join existing ones. While the game initially featured primarily Squid Game-inspired minigames, a series of content updates have expanded the game to have a variety of game modes and maps unique to Crab Game, with no discernible connection to the series. Development and release Crab Game was initially created in two weeks in response to Squid Game popularity and was named as such to avoid a cease and desist letter from Netflix. On 29 October 2021, Crab Game was released on Steam for Microsoft Windows. The game was released on Itch.io for macOS and Linux as Dani was unsure of their stability due to not being able to test them; said editions were later released on Steam on 16 November. Since its release, Crab Game has been receiving content updates consisting of new maps, more games, and better optimization for slower computers or internet speeds. Reception Crab Game was well-received upon its initial release, reaching an all-time peak of over 56,000 players on Steam and over 211,000 viewers on Twitch. The game also quickly gained exposure on YouTube. On 2 November 2021, Twitch streamer xQc experienced a DDoS attack while playing Crab Game, causing him to disconnect from the internet. Various Twitch streamers also experienced DDoS attacks, such as Sodapoppin and Nick Polom. The issues were confirmed by developer Dani to be caused by the network the game was running on, which made the IP addresses of players public; he urged players to not join any public lobbies to prevent any DDoS attacks. The issue has since been fixed. Awards References Video games based on television series 2021 video games Video games developed in Norway Windows games Free-to-play video games Indie video games Multiplayer online games MacOS games First-person video games Linux games Squid Game Party video games
2639573
https://en.wikipedia.org/wiki/Sega%20Genesis
Sega Genesis
The Sega Genesis, known as the Mega Drive outside North America, is a 16-bit fourth-generation home video game console developed and sold by Sega. The Genesis was Sega's third console and the successor to the Master System. Sega released it in 1988 in Japan as the Mega Drive, and in 1989 in North America as the Genesis. In 1990, it was distributed as the Mega Drive by Virgin Mastertronic in Europe, Ozisoft in Australasia, and Tec Toy in Brazil. In South Korea, it was distributed by Samsung as the Super Gam*Boy and later the Super Aladdin Boy. In Russia, it was distributed by Forrus. Designed by an R&D team supervised by Hideki Sato and Masami Ishikawa, the Genesis was adapted from Sega's System 16 arcade board, centered on a Motorola 68000 processor as the CPU, a Zilog Z80 as a sound controller, and a video system supporting hardware sprites, tiles, and scrolling. It plays a library of more than 900 games on ROM-based cartridges. Several add-ons were released, including a Power Base Converter to play Master System games. It was released in several different versions, some created by third parties. Sega created two network services to support the Genesis: Sega Meganet and Sega Channel.
In Japan, the Mega Drive fared poorly against its two main competitors, Nintendo's Super Famicom and NEC's PC Engine, but it achieved considerable success in North America, Brazil, and Europe. Contributing to its success were its library of arcade game ports, the popularity of Sega's Sonic the Hedgehog series, several popular sports franchises, and aggressive youth marketing that positioned it as the cool console for adolescents. The 1991 North American release of the Super Nintendo Entertainment System triggered a fierce battle for market share in the United States and Europe known as the "console war". This drew attention to the video game industry, and the Genesis and several of its games attracted legal scrutiny on matters involving reverse engineering and video game violence. Controversy surrounding violent games such as Night Trap and Mortal Kombat led Sega to create the Videogame Rating Council, a predecessor to the Entertainment Software Rating Board.
Sega sold 30.75 million first-party Genesis units worldwide. In addition, Tec Toy sold an estimated three million licensed variants in Brazil, Majesco projected it would sell 1.5 million licensed variants of the system in the United States, and smaller numbers were sold by Samsung in South Korea. By the mid-2010s, licensed third-party Genesis rereleases were still being sold by AtGames in North America and Europe. Many games have been re-released in compilations or on online services such as the Nintendo Virtual Console, Xbox Live Arcade, PlayStation Network, and Steam. The Genesis was succeeded in 1994 by the Sega Saturn.
History
Development
In the early 1980s, Sega Enterprises, Inc. – then a subsidiary of Gulf+Western – was one of the top five arcade game manufacturers active in the United States, as company revenues surpassed $200 million between July 1981 and June 1982. A downturn in the arcade business starting in 1982 seriously hurt the company, leading Gulf+Western to sell its North American arcade manufacturing organization and the licensing rights for its arcade games to Bally Manufacturing. The company retained Sega's North American R&D operation, as well as its Japanese subsidiary, Sega Enterprises, Ltd. With its arcade business in decline, Sega Enterprises, Ltd.
president Hayao Nakayama advocated that the company leverage its hardware expertise to move into the home console market in Japan, which was in its infancy at the time. Nakayama received permission to proceed with this project, leading to the release of Sega's first home video game system, the SG-1000, in July 1983. While it had sold 160,000 units in Japan, far exceeding Sega's expectations, sales at stores were dominated by Nintendo's Famicom which had been released the same day. Sega estimated that the Famicom outsold the SG-1000 by a 10-to-1 margin. The SG-1000 was replaced by the Sega Mark III within two years. In the meantime, Gulf+Western began to divest itself of its non-core businesses after the death of company founder Charles Bluhdorn, so Nakayama and former Sega CEO David Rosen arranged a management buyout of the Japanese subsidiary in 1984 with financial backing from CSK Corporation, a prominent Japanese software company. Nakayama was then installed as CEO of Sega Enterprises, Ltd. In 1986, Sega redesigned the Mark III for release in North America as the Master System. This was followed by a European release the next year. Although the Master System was a success in Europe, and later in Brazil, it failed to ignite significant interest in the Japanese or North American markets, which, by the mid-to-late 1980s, were both dominated by Nintendo. With Sega continuing to have difficulty penetrating the home market, Sega's console R&D team, led by Masami Ishikawa and supervised by Hideki Sato, began work on a successor to the Master System almost immediately after that console launched. In 1987, Sega faced another threat to its console business when Japanese computer giant NEC released the PC Engine amid great publicity. To remain competitive against the two more established consumer electronics companies, Ishikawa and his team decided they needed to incorporate a 16-bit microprocessor into their new system to make an impact in the marketplace and once again turned to Sega's strengths in the arcade industry to adapt the successful Sega System 16 arcade board into architecture for a home console. The decision to use a Motorola 68000 as the system's main CPU was made late in development, while a Zilog Z80 was used as a secondary CPU to handle the sound due to fears that the load to the main CPU would be too great if it handled both the visuals and the audio. The 68000 chip was expensive and would have driven the retail price of the console up greatly, but Sega was able to negotiate with a distributor for a tenth of its price on an up-front volume order with the promise of more orders pending the console's future success. The appearance of the Mega Drive was designed by a team led by Mitsushige Shiraiwa that drew inspiration from audiophile equipment and automobiles. Shiraiwa said this more mature look helped to target the Mega Drive to all ages, unlike the Famicom, which was aimed primarily at children. According to Sato, the Japanese design for the Mega Drive was based on the appearance of an audio player, with "16-bit" embossed in a golden metallic veneer to create an impression of power. The console was announced in the June 1988 issue of Japanese gaming magazine Beep! as the Mark V, but Sega management wanted a stronger name. After reviewing more than 300 proposals, the company settled on "Mega Drive". In North America, the name was changed to "Genesis". Rosen said he insisted on the name as he disliked "Mega Drive" and wanted to represent "a new beginning" for Sega. 
Sato said some design elements changed, such as the gold-colored "16-bit" wording, because it was believed that the color would be mistaken for yellow. He believes that the changes in design are representative of the differences in values between Japanese and American culture. Launch Sega released the Mega Drive in Japan on October 29, 1988, though the launch was overshadowed by Nintendo's release of Super Mario Bros. 3 a week earlier. Positive coverage from magazines Famitsu and Beep! helped to establish a following. Within two days of release, the console's initial production run sold out. However, Sega only managed to ship 400,000 units in the first year. In order to increase sales, Sega released various peripherals and games, including an online banking system and answering machine called the Sega Mega Anser. Nevertheless, the Mega Drive was unable to overtake the venerable Famicom and remained a distant third in Japan behind Nintendo's Super Famicom and NEC's PC Engine throughout the 16-bit era. Sega announced a North American release date for the system on January 9, 1989. At the time, Sega did not possess a North American sales and marketing organization and was distributing its Master System through Tonka. Dissatisfied with Tonka's performance, Sega looked for a new partner to market the Genesis in North America and offered the rights to Atari Corporation, which did not yet have a 16-bit system. David Rosen made the proposal to Atari CEO Jack Tramiel and the president of Atari's Entertainment Electronics Division, Michael Katz. Tramiel declined to acquire the new console, deeming it too expensive, and instead opted to focus on the Atari ST. Sega decided to launch the console through its own Sega of America subsidiary, which executed a limited launch on August 14, 1989, in New York City and Los Angeles. The Genesis was released in the rest of North America later that year. The European version of the Mega Drive was released in September 1990, at a price of , i.e. . The release was handled by Virgin Mastertronic, which was later purchased by Sega in 1991 and became Sega of Europe. Games like Space Harrier II, Ghouls 'n Ghosts, Golden Axe, Super Thunder Blade, and The Revenge of Shinobi were available in stores at launch. The console was also bundled with Altered Beast. The Mega Drive and its first batch of games were shown at the 1990 European Computer Entertainment Show (ECES) in Earl's Court. Between July and August 1990, Virgin initially placed their order for 20,000 Mega Drive units. However, the company increased the order by 10,000 units when advanced orders had exceeded expectations, and another 10,000 units was later added following the console's success at the ECES event. The projected number of units to be sold between September and December 1990 had eventually increased to 40,000 units in the United Kingdom alone. Other companies assisted in distributing the console to various countries worldwide. Ozisoft handled the Mega Drive's launch and marketing in Australia, as it had done before with the Master System. In Brazil, the Mega Drive was released by Tectoy in 1990, only a year after the Brazilian release of the Master System. Tectoy produced games exclusively for the Brazilian market, and brought the Sega Meganet online service there in 1995. Samsung handled sales and distribution in Korea, where it was named Super Gam*Boy and retained the Mega Drive logo alongside the Samsung name. It was later renamed Super Aladdin Boy. 
In India, Sega entered a distribution deal with Shaw Wallace in April 1994 in order to circumvent an 80% import tariff, with each unit selling for INR₹18,000. In Russia, Sega officially licensed the console to local distributor Forrus in 1994, replaced in 1996 by Bitman. That year, the video game console market generated between and in Russia, with Sega accounting for half of all console sales in the country. However, only about 15% of the sales were official Sega units distributed by Bitman, while the rest were unofficial counterfeit clones. North American sales and marketing For the North American market, former Atari Corporation Entertainment Electronics Division president and new Sega of America CEO Michael Katz instituted a two-part approach to build sales. The first part involved a marketing campaign to challenge Nintendo head-on and emphasize the more arcade-like experience available on the Genesis, with slogans including "Genesis does what Nintendon't". Since Nintendo owned the console rights to most arcade games of the time, the second part involved creating a library of recognizable games which used the names and likenesses of celebrities and athletes, such as Pat Riley Basketball, Arnold Palmer Tournament Golf, James 'Buster' Douglas Knockout Boxing, Joe Montana Football, Tommy Lasorda Baseball, Mario Lemieux Hockey, and Michael Jackson's Moonwalker. Nonetheless, Sega struggled to overcome Nintendo's presence in consumers' homes. Tasked by Nakayama to sell one million units within the first year, Katz and Sega of America sold only 500,000. At the Winter Consumer Electronics Show (Winter CES) in January 1990, the Sega Genesis demonstrated a strong line-up of games which received a positive reception for approaching arcade-quality graphics and gameplay as well as for providing non-arcade experiences such as Phantasy Star II. In mid-1990, Nakayama hired Tom Kalinske to replace Katz as CEO of Sega of America. Although Kalinske knew little about the video game market, he surrounded himself with industry-savvy advisors. A believer in the razor and blades model, he developed a four-point plan: cut the price of the console, create an American team to develop games targeted at the American market, expand the aggressive advertising campaigns, and replace the bundled game Altered Beast with a new game, Sonic the Hedgehog. The Japanese board of directors initially disapproved of the plan, but all four points were approved by Nakayama, who told Kalinske, "I hired you to make the decisions for Europe and the Americas, so go ahead and do it." Critics praised Sonic as one of the greatest games yet made, and Genesis sales increased as customers who had been waiting for the release of the international version of Nintendo's Super Famicom, the Super Nintendo Entertainment System (SNES), decided to purchase a Genesis instead. The SNES debuted against an established competitor, while NEC's TurboGrafx-16 failed to gain traction, and NEC soon pulled out of the market. In large part due to the popularity of Sonic the Hedgehog, the Genesis outsold the SNES in the United States nearly two to one during the 1991 holiday season. Sega controlled 65% of the 16-bit console market in January 1992, the first time Nintendo had not been the console leader since 1985. The Genesis outsold the SNES for four consecutive Christmas seasons due to its two-year lead, lower price point, and larger game library compared to the SNES at its release. 
Sega had ten games for every game on SNES, and while the SNES had an exclusive version of Final Fight, one of Sega's internal development teams created Streets of Rage, which had bigger levels, tougher enemies, and a well-regarded soundtrack. ASCII Entertainment reported in early 1993 that Genesis had 250 games versus 75 for the SNES, but limited shelf space meant that stores typically offered 100 Genesis and 50 SNES games. The NES was still the leader, with 300 games and 100 on shelves. Sega's advertising positioned the Genesis as the cooler console, and coined the term blast processing, an obscure and unused graphics programming method, to suggest that its processing capabilities were far greater than those of the SNES. A Sony focus group found that teenage boys would not admit to owning an SNES rather than a Genesis. With the Genesis often outselling the SNES at a ratio of 2:1, Nintendo and Sega focused heavily on impression management of the market, even going to the point of deception; Nintendo claimed it had sold more consoles in 1991 than it actually had, and forecasted it would sell 6 million consoles by the end of 1992, while its actual U.S. install base at the end of 1992 was only just more than 4 million units. Due to these tactics, it was difficult to ascertain a clear leader in market share for several years at a time, with Nintendo's dollar share of the U.S. 16-bit market dipping down from 60% at the end of 1992 to 37% at the end of 1993, Sega claiming 55% of all 16-bit hardware sales during 1994, and Donkey Kong Country helping the SNES to outsell the Genesis from 1995 through 1997. According to a 2004 study of NPD sales data, the Genesis maintained its lead over the Super NES in the American 16-bit console market. However, according to a 2014 Wedbush Securities report based on revised NPD sales data, the SNES outsold the Sega Genesis in the U.S. market by units. Electronic Arts To compete with Nintendo, Sega was more open to new types of games, but still tightly controlled the approval process for third-party games and charged high prices for cartridge manufacturing. American third-party publisher Electronic Arts (EA) sought a better deal, but had met resistance from Sega. They decided to reverse-engineer the Genesis, using a clean-room method similar to the method Phoenix Technologies had used to reverse-engineer the IBM Personal Computer BIOS around 1984. The process began in 1989, led by Steve Hayes and Jim Nitchals. They created a controlled room in EA headquarters nicknamed "Chernobyl", to which only one person was allowed access, Mike Schwartz. Schwartz reviewed Sega's copyrighted development manuals and tools, studied the Genesis hardware and games, and wrote original documentation that summarized his findings. The process took him about a month. His work was reviewed by EA's lawyers before being disseminated to Hayes and Nitchals to verify its originality, and subsequently to the rest of the developers to let them build games. After a few months, EA began developing for the Genesis in earnest. EA founder Trip Hawkins confronted Nakayama the day before the 1990 Consumer Electronics Show (CES), informing him that EA had the ability to run its own licensing program if Sega refused to meet its demands. Sega relented, and the next day EA's upcoming Genesis games were showcased at CES. 
EA signed what Hawkins described as "a very unusual and much more enlightened license agreement" with Sega in June 1990: "Among other things, we had the right to make as many titles as we wanted. We could approve our own titles ... the royalty rates were a lot more reasonable. We also had more direct control over manufacturing." After the deal was in place, EA chief creative officer Bing Gordon learned that "we hadn't figured out all the workarounds" and "Sega still had the ability to lock us out ... It just would have been a public relations fiasco." EA released its first Genesis games, Populous and Budokan: The Martial Spirit, within the month. The first Genesis version of EA's John Madden Football arrived before the end of 1990, and became what Gordon called a "killer app". Taking advantage of the licensing agreement, Gordon and EA's vice president of marketing services Nancy Fong created a visual identifier for EA's Genesis cartridges: a yellow tab molded into the cartridge casing. Sonic the Hedgehog Sega held a company-wide contest to create a mascot character to compete with Nintendo's Mario series. The winning submission was a blue hedgehog with red shoes, Sonic, created by Naoto Ohshima, spawning one of the best-selling video game franchises in history. The gameplay of Sonic the Hedgehog originated with a tech demo created by Yuji Naka, who had developed a prototype platform game that involved a fast-moving character rolling in a ball through a long winding tube. This concept was developed with Ohshima's character design and levels conceived by designer Hirokazu Yasuhara. Although Katz and Sega of America's marketing experts disliked Sonic, certain that it would not catch on with American children, Kalinske's strategy to place Sonic the Hedgehog as the pack-in game paid off. Sonic the Hedgehog greatly increased the popularity of the Genesis in North America, and the bundle is credited with helping Sega gain 65% of the market share against Nintendo. Similarly in Europe, Sega had captured a 65% share of the European console market, where the Mega Drive maintained its lead over the SNES through 1994. Trademark Security System and Sega v. Accolade After the release of the Genesis in 1989, video game publisher Accolade began exploring options to release some of their PC games on the console. At the time, Sega had a licensing deal in place for third-party developers that increased the costs to the developer. According to Accolade co-founder Alan Miller, "One pays them between $10 and $15 per cartridge on top of the real hardware manufacturing costs, so it about doubles the cost of goods to the independent publisher." To get around licensing, Accolade chose to seek an alternative way to bring their games to the Genesis. It did so by purchasing one in order to decompile the executable code of three Genesis games. Such information was used to program their new Genesis cartridges in a way that would allow them to disable the security lockouts on the Genesis that prevented unlicensed games from being played. This strategy was used successfully to bring Ishido: The Way of Stones to the Genesis in 1990. To do so, Accolade had copied Sega's copyrighted game code multiple times in order to reverse engineer the software of Sega's licensed Genesis games. As a result of piracy in some countries and unlicensed development issues, Sega incorporated a technical protection mechanism into a new edition of the Genesis released in 1990, referred to as the Genesis III. 
This new variation of the Genesis included code known as the Trademark Security System (TMSS), which, when a game cartridge was inserted, would check for the presence of the string "SEGA" at a particular point in the memory contained in the cartridge. If the string was present, the console would run the game and would briefly display the message "Produced by or under license from Sega Enterprises Ltd." This system had a twofold effect: it added extra protection against unlicensed developers and software piracy, and it forced the Sega trademark to be displayed when the game was powered up, making a lawsuit for trademark infringement possible if unlicensed software were to be developed.
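The passage above describes the check only at the level of "look for the string"; the exact header location and boot behaviour are not given. As a rough illustration, the sketch below (in Python) shows what a TMSS-style gate amounts to: refuse to start a ROM image unless the "SEGA" marker appears in an assumed header window. The offset and window size are illustrative assumptions, not documented values from Sega's boot code.

```python
# Illustrative sketch of a TMSS-style check on a ROM image.
# HEADER_OFFSET and WINDOW are assumptions for illustration only,
# not documented values from Sega's hardware or boot ROM.

HEADER_OFFSET = 0x100   # assumed start of the console-name field in the header
WINDOW = 16             # assumed number of header bytes to inspect

def passes_tmss_style_check(rom_path: str) -> bool:
    """Return True if the ROM carries the 'SEGA' marker in its header window."""
    with open(rom_path, "rb") as f:
        f.seek(HEADER_OFFSET)
        header = f.read(WINDOW)
    return b"SEGA" in header

if __name__ == "__main__":
    import sys
    if passes_tmss_style_check(sys.argv[1]):
        print("Marker found: the console would run the game and show the trademark notice.")
    else:
        print("Marker missing: the console would refuse to start the game.")
```

Because the gate keys on a short text marker that any publisher can reproduce, writing the marker into an unlicensed cartridge both defeats the lockout and triggers the trademark display, which is precisely the legal lever at issue in the dispute described next.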
Accolade learned of this development at the Winter Consumer Electronics Show in January 1991, where Sega showed the new Genesis III and demonstrated it screening and rejecting an Ishido game cartridge. With more games planned for the following year, Accolade successfully identified the TMSS file. It later added this file to the games HardBall!, Star Control, Mike Ditka Power Football, and Turrican. In response to the creation of these unlicensed games, Sega filed suit against Accolade in the United States District Court for the Northern District of California, on charges of trademark infringement, unfair competition, and copyright infringement. Accolade responded with a counterclaim accusing Sega of falsifying the source of its games by causing the Sega trademark to be displayed when the games were powered up. Although the district court initially ruled for Sega and issued an injunction preventing Accolade from continuing to reverse engineer the Genesis, Accolade appealed the verdict to the United States Court of Appeals for the Ninth Circuit. As a result of the appeal, the Ninth Circuit overturned the district court's verdict and ruled that Accolade's decompilation of the Sega software constituted fair use. The court's written opinion followed on October 20, 1992, and noted that the use of the software was non-exploitative, although commercial. Further, the court found that the trademark infringement, being required by the TMSS for a Genesis game to run on the system, had been inadvertently triggered by a fair use act and was the fault of Sega for having caused false labeling. Ultimately, Sega and Accolade settled the case on April 30, 1993. As part of this settlement, Accolade became an official licensee of Sega, and later developed and released Barkley Shut Up and Jam! while under license. The terms of the licensing, including whether or not any special arrangements or discounts were made to Accolade, were not released to the public. The financial terms of the settlement were also not disclosed, although both companies agreed to pay their own legal costs.
Congressional hearings on video game violence
In 1993, the American media began to focus on the mature content of certain video games. Games such as Night Trap for the Sega CD, an add-on, received unprecedented scrutiny. Issues about Night Trap were brought up in the United Kingdom, with former Sega of Europe development director Mike Brogan noting that "Night Trap got Sega an awful lot of publicity ... it was also cited in UK Parliament for being classified as '15' due to its use of real actors." This came at a time when Sega was capitalizing on its image as an edgy company with attitude, and this only reinforced that image. By far the year's most controversial game was Midway's Mortal Kombat, ported to the Genesis and SNES by Acclaim Entertainment. In response to public outcry over the game's graphic violence, Nintendo decided to replace the blood in the game with "sweat" and the arcade's gruesome "fatalities" with less violent finishing moves. Sega took a different approach, instituting America's first video game ratings system, the Videogame Rating Council (VRC), for all its current systems. Ratings ranged from the family-friendly GA rating to the more mature MA-13 and the adults-only MA-17. With the rating system in place, Sega released its version of Mortal Kombat, appearing to have removed all the blood and sweat effects and toning down the finishing moves even more than in the SNES version. However, all the arcade's blood and uncensored finishing moves could be enabled by entering a "Blood Code". This technicality allowed Sega to release the game with a relatively low MA-13 rating. Meanwhile, the tamer SNES version shipped without a rating. The Genesis version of Mortal Kombat was well received by the gaming press and by fans, outselling the SNES version three- or four-to-one, while Nintendo was criticized for censoring the SNES version. At the congressional hearings on video game violence, Nintendo of America executive vice president Howard Lincoln was quick to point out to Senator Joe Lieberman that Night Trap had no such rating. In response, Sega of America vice president Bill White showed a videotape of violent video games on the SNES and stressed the importance of rating video games. At the end of the hearing, Lieberman called for another hearing in February 1994 to check on progress toward a rating system for video game violence. As a result of the congressional hearings, Night Trap started to generate more sales and was ported to the PC, Sega 32X, and 3DO. According to Digital Pictures founder Tom Zito, "You know, I sold 50,000 units of Night Trap a week after those hearings." Despite the increased sales, Sega decided to recall Night Trap and re-release it with revisions in 1994 due to the congressional hearings. After the close of these hearings, video game manufacturers came together to establish the rating system that Lieberman had called for. Initially, Sega proposed the universal adoption of its system, but after objections by Nintendo and others, Sega took a role in forming a new one. This became the Entertainment Software Rating Board, an independent organization that received praise from Lieberman. With this new rating system in place for the 1994 holiday season, Nintendo decided its censorship policies were no longer needed, and the SNES port of Mortal Kombat II was released uncensored.
32-bit era and beyond
Sega released two add-ons to increase the capabilities of the Genesis: a CD peripheral, the Sega CD (Mega-CD outside North America and Brazil), and a 32-bit peripheral, the Sega 32X. Worldwide, Sega sold 2.24 million Sega CD units and 800,000 32X units. Following the launch of the next-generation 32-bit Sony PlayStation and Sega Saturn, sales of 16-bit hardware and software continued to account for 64% of the video game market in 1995. Sega underestimated the continued popularity of the Genesis and did not have the inventory to meet demand. Sega captured 43% of the dollar share of the U.S. video game market and claimed to have sold more than two million Genesis units in 1995, while Genesis software such as Vectorman remained successful, but Kalinske estimated that "we could have sold another 300,000 Genesis systems in the November/December timeframe".
Nakayama's decision to focus on the Saturn, based on the systems' relative performance in Japan, has been cited as the major contributing factor in this miscalculation. By contrast, Nintendo concentrated on the 16-bit home console market, as well as its successful handheld, the Game Boy, and took in 42% of the video game market dollar share without launching a 32-bit console. Following tensions with Sega Enterprises, Ltd. over its focus on the Saturn, Kalinske, who oversaw the rise of the Genesis in 1991, grew uninterested in the business and resigned in mid-1996. Sega sold 30.75 million Genesis units worldwide. Of these, 3.58 million were sold in Japan, and sales in Europe and the U.S. are roughly estimated at 8 million and 18–18.5 million as of June 1997 (at which time Sega was no longer manufacturing the system) respectively. In 1998, Sega licensed the Genesis to Majesco Entertainment to rerelease it in North America. Majesco began reselling millions of unsold cartridges at a budget price, together with 150,000 units of the second model of the Genesis. It released the Genesis 3, projecting to sell 1.5 million units of the console by the end of 1998. Tectoy continues to sell variants of the original hardware (including emulated variants) and has sold an estimated 3 million units in Brazil . Technical specifications The main microprocessor is a 16/32-bit Motorola 68000 CPU clocked at 7.6 MHz. An 8-bit Zilog Z80 processor controls the sound hardware and provides backward compatibility with the Master System. The Genesis has 64 kB of RAM, 64 kB of video RAM and 8 kB of audio RAM. It can display up to 61 colors at once from a palette of 512. The games are in ROM cartridge format and inserted in the top. The Genesis produces sound using a Texas Instruments SN76489 PSG, integrated with the Video Display Processor (VDP), and a Yamaha YM2612 FM synthesizer chip. The Z80 processor is primarily used to control both sound chips to produce stereo music and sound effects. Most revisions of the original Genesis contain a discrete YM2612 and a separate YM7101 VDP; in a later revision, the chips were integrated into a single custom ASIC (FC1004). The back of the model 1 console provides a radio frequency output port (designed for use with antenna and cable systems) and a specialized 8-pin DIN port, which both provide video and audio output. Both outputs produce monophonic sound; a headphone jack on the front of the console produces stereo sound. On the model 2, the DIN port, radio frequency output port, and headphone jack are replaced by a 9-pin mini-DIN port on the back for composite video, RGB and stereo sound, and the standard RF switch. Earlier model 1 consoles have a 9-pin extension port. An edge connector on the bottom right of the console can be connected to a peripheral. Peripherals The standard controller features a rounded shape, a directional pad, three main buttons, and a "start" button. In 1993, Sega released a slightly smaller pad with three additional face buttons, similar to the design of buttons on some popular arcade fighting games such as Street Fighter II. Sega also released a wireless revision of the six-button controller, the Remote Arcade Pad. The system is backward compatible with the Master System. The first peripheral released, the Power Base Converter (Master System Converter in Europe), allows Master System games to be played. It's designed for the Model 1. 
It will work with the Model 2, but the shell blocks the power and AC ports, so the converter must either have its shell modified or be used with a pass-through adaptor. The converter does not work with the Model 3 or the Nomad. A second model, the Master System Converter 2, was released only in Europe for use with the Mega Drive II. It will still work with NTSC and PAL Model 1 and 2 consoles, though the Model 3 and Nomad remain incompatible with it. Other peripherals were released to add functionality. The Menacer is a wireless, infrared light gun peripheral used with compatible games. Other third parties created light gun peripherals for the Genesis, such as the American Laser Games Pistol and the Konami Justifier. Released for art creation software, the Sega Mega Mouse features three buttons and is only compatible with a few games, such as Eye of the Beholder. A foam-covered bat called the BatterUP and the TeeVGolf golf club were released for both the Genesis and SNES. In November 1993, Sega released the Sega Activator, an octagonal device that lies flat on the floor and was designed to translate the player's physical movements into game inputs; it was first shown at the January 1993 Consumer Electronics Show (CES), where it was demonstrated with Streets of Rage 2. Several high-profile games, including Mortal Kombat and Street Fighter II: Special Champion Edition, were adapted to support the peripheral. The device was a commercial failure, due mainly to its inaccuracy and its high price point. IGN editor Craig Harris ranked the Sega Activator the third worst video game controller ever made. Both EA and Sega released multitaps to allow more than the standard two players to play at once. Initially, EA's version, the 4 Way Play, and Sega's adapter, the Team Player, only supported each publisher's own games. In response to complaints about this, Sega publicly stated, "We have been working hard to resolve this problem since we learned of it", and promised that a new Team Player which would work with all multitap games for the console would be released shortly. Later games were created to work on both the 4 Way Play and the Team Player. Codemasters also developed the J-Cart system, providing two extra controller ports on the cartridge itself, although the technology came late in the console's life and is only featured in a few games. Sega planned to release a steering wheel peripheral in 1994, and the Genesis version of Virtua Racing was advertised as being "steering wheel compatible", but the peripheral was cancelled.
Network services
In its first foray into online gaming, Sega created Sega Meganet, which debuted in Japan on November 3, 1990. Operating through a cartridge and a peripheral called the "Mega Modem", this service allowed Mega Drive players to play a total of seventeen games online. A North American version, dubbed "Tele-Genesis", was announced at the Winter Consumer Electronics Show (Winter CES) in January 1990 but never released, though a version was operated in Brazil starting in 1995. Another phone-based system, the Mega Anser, turned the Japanese Mega Drive into an online banking terminal. In 1994, Sega started the Sega Channel, a game distribution system using the cable television services Time Warner Cable and TCI. Using a special peripheral, Genesis players could download a game from a library of fifty each month, as well as demos of upcoming releases. Games were downloaded to internal memory and deleted when the console was powered off.
The Sega Channel reached 250,000 subscribers at its peak and ran until July 31, 1998, well past the release of the Sega Saturn. In an effort to compete with Sega, third-party developer Catapult Entertainment created the XBAND, a peripheral which allowed Genesis players to engage in online competitive gaming. Using telephone services to share data, XBAND was initially offered in five U.S. cities in November 1994. The following year, the service was extended to the SNES, and Catapult teamed up with Blockbuster Video to market the service, but as interest in the service waned, it was discontinued in April 1997. Library The Genesis library was initially modest, but eventually grew to contain games to appeal to all types of players. The initial pack-in game was Altered Beast, which was later replaced with Sonic the Hedgehog in 1991. Top sellers included Sonic the Hedgehog, its sequel Sonic the Hedgehog 2, and Disney's Aladdin. During development for the console, Sega Enterprises focused on developing action games, while Sega of America was tasked with developing sports games. A large part of the appeal of the Genesis library during the console's lifetime was the arcade-based experience of its games, as well as more difficult entries such as Ecco the Dolphin, and sports games such as Joe Montana Football. Compared to its competition, Sega advertised to an older audience by hosting more mature games, including the uncensored version of Mortal Kombat. Notably, the arcade hit Street Fighter II by Capcom initially skipped the Genesis, instead only being released on the SNES. However, as the Genesis continued to grow in popularity, Capcom eventually ported a version of Street Fighter II to the system known as Street Fighter II: Champion Edition, that would go on to sell over a million copies. One of the biggest third-party companies to support the Genesis early on was Electronic Arts. Trip Hawkins, founder and then president of EA, believed the faster drawing speed of the Genesis made it more suitable for sport games than the SNES, and credits EA's success on the Genesis for helping catapult the EA Sports brand. Another third-party blockbuster for the system was the port of the original Mortal Kombat. Although the arcade game was released on the SNES and Genesis simultaneously, the two ports were not identical. The SNES version looked closer to the arcade game, but the Genesis version allowed players to bypass censorship, helping make it more popular. In 1997, Sega of America claimed the Genesis had a software attach rate of 16 games sold per console, double that of the SNES. Sega Virtua Processor The Super NES can have enhancement chips inside each cartridge to produce more advanced graphics; for example, the launch game Pilotwings (1990) contains a digital signal processor. Later, the Super FX chip was designed to offload complex rendering tasks from the main CPU. It was first used in Star Fox (1993) for real-time 3D polygons, and Super Mario World 2: Yoshi's Island (1995) demonstrates rotation, scaling, and stretching of individual sprites and manipulates large areas of the screen. Sega had produced such effects on its arcade platforms, and adapted some to the home console by developing the Sega Virtua Processor (SVP). Based on a digital signal processor core by Samsung Electronics, this chip enables the Genesis to render polygons in real time and provides an "Axis Transformation" unit that handles scaling and rotation. 
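At heart, the scaling and rotation that such an "Axis Transformation" unit handles is ordinary 2D affine arithmetic. The sketch below is a minimal Python illustration of the math involved, scaling a point about the origin and then rotating it; it is only meant to show the kind of per-point work being offloaded, not how the SVP or the console hardware actually implements it.

```python
# Minimal illustration of the scale-and-rotate arithmetic that a hardware
# "axis transformation" unit accelerates. Generic 2D affine math only;
# this is not the SVP's actual implementation.
import math

def scale_rotate(x: float, y: float, scale: float, angle_deg: float) -> tuple[float, float]:
    """Scale a point about the origin, then rotate it by angle_deg."""
    a = math.radians(angle_deg)
    sx, sy = x * scale, y * scale
    return (sx * math.cos(a) - sy * math.sin(a),
            sx * math.sin(a) + sy * math.cos(a))

# Example: a sprite corner at (10, 0), doubled in size and rotated 90 degrees,
# ends up at roughly (0, 20).
print(scale_rotate(10, 0, 2.0, 90))
```

Applying this to every pixel of a large sprite on every frame is a heavy load for a 7.6 MHz 68000, which is the sort of work the dedicated arcade hardware, and later the SVP, was built to absorb.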
Virtua Racing (1992) is the only game released with this chip and the only Genesis cartridge with any enhancement chip, running at a significantly higher and more stable frame rate than filled polygon games on the SNES. The chip drastically increased the cost of the cartridge, and at , Virtua Racing is the most expensive Genesis cartridge ever produced. Two other games, Virtua Fighter and Daytona USA, were planned for the SVP chip, but were instead moved into the Saturn's launch line-up. Sega planned to sell the SVP chip as a separate upgrade module for the Genesis, but it was canceled, in order to focus its efforts on more powerful 32X add-on. Add-ons In addition to accessories such as the Power Base Converter, the Genesis supports two add-ons that each support their own game libraries. The first is the Sega CD (known as the Mega-CD in all regions except for North America), a compact disc-based peripheral that can play its library of games in CD-ROM format. The second is the Sega 32X, a 32-bit peripheral which uses ROM cartridges and serves as a pass-through for Genesis games. Sega produced a custom power strip to fit the peripherals' large AC adapters. Both add-ons were officially discontinued in 1996. Sega CD By 1991, compact discs had gained in popularity as a data storage device for music and software. PCs and video game companies had started to make use of this technology. NEC had been the first to include CD technology in a game console with the release of the TurboGrafx-CD add-on, and Nintendo was making plans to develop its own CD peripheral as well. Seeing the opportunity to gain an advantage over its rivals, Sega partnered with JVC to develop a CD-ROM add-on for the Genesis. Sega launched the Mega-CD in Japan on December 1, 1991, initially retailing at JP¥49,800. The CD add-on was launched in North America on October 15, 1992, as the Sega CD, with a retail price of US$299; it was released in Europe as the Mega-CD in 1993. In addition to greatly expanding the potential size of its games, this add-on unit upgraded the graphics and sound capabilities by adding a second, more powerful processor, more system memory, and hardware-based scaling and rotation similar to that found in Sega's arcade games. It provided battery-backed storage RAM to allow games to save high scores, configuration data, and game progress. Shortly after its launch in North America, Sega began shipping the Sega CD with the pack-in game Sewer Shark, a full motion video (FMV) game developed by Digital Pictures, a company that became an important partner for Sega. Touting the benefits of the CD's comparatively vast storage space, Sega and its third-party developers produced a number of games for the add-on that include digital video in their gameplay or as bonus content, as well as re-releasing several cartridge-based games with high-fidelity audio tracks. In 1993, Sega released the Sega CD 2, a smaller and lighter version of the add-on designed for the Genesis II, at a reduced price compared to the original. A limited number of games were later developed that use both the Sega CD and the Sega 32X add-ons. The Mega-CD sold only 100,000 units during its first year in Japan, falling well below expectations. Although many consumers blamed its high launch price, it also suffered from a tiny software library; only two games were available at launch. This was due in part to the long delay before Sega made its software development kit available to third-party developers. 
Sales were higher in North America and Europe, although the novelty of FMV and CD-enhanced games quickly wore off, as many later games were met with lukewarm or negative reviews. In 1995, Sega announced a shift in focus to its new console, the Saturn, and discontinued advertising for Genesis hardware. The Sega CD sold 2.24 million units worldwide. Sega 32X With the release of the Saturn scheduled for 1995, Sega began developing a stopgap to bridge the gap between the Genesis and Saturn and serve as a less expensive entry into the 32-bit era. At the Winter Consumer Electronics Show in January 1994, Sega of America research and development head Joe Miller took a phone call from Nakayama, in which Nakayama stressed the importance of a quick response to the Atari Jaguar. One idea came from a concept from Sega Enterprises, "Project Jupiter," a new standalone console. Project Jupiter was initially planned as a new version of the Genesis, with an upgraded color palette and a lower cost than the Saturn, and limited 3D capabilities thanks to integration of ideas from the development of the Sega Virtua Processor chip. Miller suggested an alternative strategy, citing concerns with releasing a new console with no previous design specifications within six to nine months. At the suggestion from Miller and his team, Sega designed the 32X as a peripheral for the existing Genesis, expanding its power with two 32-bit SuperH-2 processors. The SH-2 had been developed in 1993 as a joint venture between Sega and Japanese electronics company Hitachi. At the end of the Consumer Electronics show, with the basic design of the 32X in place, Sega Enterprises invited Sega of America to assist in development of the new add-on. Although the new unit was a stronger console than originally proposed, it was not compatible with Saturn games. Before the 32X could be launched, the release date of the Saturn was announced for November 1994 in Japan, coinciding with the 32X's target launch date in North America. Sega of America now was faced with trying to market the 32X with the Saturn's Japan release occurring simultaneously. Their answer was to call the 32X a "transitional device" between the Genesis and the Saturn. This was justified by Sega's statement that both platforms would run at the same time and that the 32X would be aimed at players who could not afford the more expensive Saturn. The 32X was released in November 1994, in time for the holiday season. Demand among retailers was high, and Sega could not keep up orders for the system. More than 1,000,000 orders had been placed for 32X units, but Sega had only managed to ship 600,000 units by January 1995. Launching at about the same price as a Genesis console, the price of the 32X was less than half of what the Saturn's price would be at launch. Though positioning the console as an inexpensive entry into 32-bit gaming, Sega had a difficult time convincing third-party developers to create games for the new system. After an early run on the peripheral, news soon spread to the public of the upcoming release of the Sega Saturn, which would not support the 32X's games. The Saturn was released on May 11, 1995, four months earlier than its originally intended release date of September 2, 1995. The Saturn, in turn, caused developers to further shy away from the console and created doubt about the library for the 32X, even with Sega's assurances that there would be a large number of games developed for the system. 
In early 1996, Sega conceded that it had promised too much out of the 32X and decided to stop producing the system in order to focus on the Saturn. Prices for the 32X dropped to $99 and cleared out of stores at $19.95. Variations More than a dozen licensed variations of the Genesis/Mega Drive have been released. In addition to models made by Sega, alternate models were made by other companies, such as Majesco Entertainment, AtGames, JVC, Pioneer Corporation, Amstrad, and Aiwa. A number of bootleg clones were created during its lifespan. First-party models In 1993, Sega introduced a smaller, lighter version of the console, known as the Mega Drive II in Japan, Europe, and Australia and sold as Genesis (without the Sega prefix) in North America. This version omits the headphone jack, replaces the A/V-Out connector with a smaller version that supports stereo sound, and provides a simpler, less expensive mainboard that requires less power. Sega released a combined, semi-portable Genesis/Sega CD unit, the Genesis CDX (marketed as the Multi-Mega in Europe). This unit retailed at $399.95 in the US; this was roughly $100 more than the individual Genesis and Sega CD units put together, as the Sega CD had been reduced to $229 half a year before. The CDX was bundled with Sonic CD, Sega Classics Arcade Collection, and the Sega CD version of Ecco the Dolphin. The CDX features a small LCD screen that, when the unit is used to play audio CDs, displays the current track being played. With this feature and the system's lightweight build (weighing two pounds), Sega marketed it in part as a portable CD player. Late in the 16-bit era, Sega released a handheld version of the Genesis, the Genesis Nomad. Its design was based on the Mega Jet, a Mega Drive portable unit featured on airplane flights in Japan. As the only successor to the Game Gear, the Nomad operates on 6 AA batteries, displaying its graphics on a 3.25-inch (8.25-mm) LCD screen. The Nomad supports the entire Genesis library, but cannot be used with the Sega 32X, the Sega CD, or the Power Base Converter. Exclusive to the Japanese market was the TeraDrive, a Mega Drive combined with an IBM PC compatible computer. Sega also produced three arcade system boards based on the Mega Drive: the System C-2, the MegaTech, and the MegaPlay, which support approximately 80 games combined. Third-party models Working with Sega Enterprises, JVC released the Wondermega on April 1, 1992, in Japan. The system was later redesigned by JVC and released as the X'Eye in North America in September 1994. Designed by JVC to be a Genesis and Sega CD combination with high quality audio, the Wondermega's high price ($500 at launch) kept it out of the hands of average consumers. The same was true of the Pioneer LaserActive, which requires an add-on known as the Mega-LD pack, developed by Sega, in order to play Genesis and Sega CD games. Although the LaserActive was lined up to compete with the 3DO Interactive Multiplayer, the combined price of the system and the Mega-LD pack made it a prohibitively expensive option for Sega players. Aiwa released the CSD-GM1, a combination Genesis/Sega CD unit built into a boombox. Several companies added the Mega Drive to personal computers, mimicking the design of Sega's TeraDrive; these include the MSX models AX-330 and AX-990, distributed in Kuwait and Yemen, and the Amstrad Mega PC, distributed in Europe and Australia. After the Genesis was discontinued, Majesco Entertainment released the Genesis 3 as a budget version in 1998. 
A similar thing happened in Portugal, where Ecofilmes, Sega's distributor in the country, obtained a license to sell the Mega Game II. This version was more akin to the second first-party model, being noteworthy the inclusion of six-button controllers and a switch to alternate between different game regions, enabling this version to play all games without the need for any device or modification to bypass region locking. In 2009, AtGames began producing two new variations: the Firecore, which can play original Genesis cartridges as well as preloaded games, and a handheld console preloaded with 20 Genesis games. Companies such as Radica Games have released various compilations of Genesis and Mega Drive games in "plug-and-play" packages resembling the system's controller. Re-releases and emulation A number of Genesis and Mega Drive emulators have been produced, including GenEM, KGen, Genecyst, VGen, Gens, and Kega Fusion. The GameTap subscription gaming service included a Genesis emulator and had several dozen licensed Genesis games in its catalog. The Console Classix subscription gaming service includes an emulator and has several hundred Genesis games in its catalog. Compilations of Genesis games have been released for other consoles. These include Sonic Mega Collection and Sonic Gems Collection for PS2, Xbox, and Nintendo GameCube; Sega Genesis Collection for PS2 and PSP; and Sonic's Ultimate Genesis Collection (known as the Sega Mega Drive Ultimate Collection in PAL territories) for PlayStation 3 and Xbox 360. During his keynote speech at the 2006 Game Developers Conference, Nintendo president Satoru Iwata announced that Sega would make a number of Genesis/Mega Drive games available to download on the Wii's Virtual Console. There are select Genesis games available on the Xbox 360 through Xbox Live Arcade, such as Sonic the Hedgehog and Sonic 2, as well as games available via the PlayStation Network and Steam. Later releases On May 22, 2006, North American company Super Fighter Team released Beggar Prince, a game translated from a 1996 Chinese original. It was released worldwide and was the first commercial Genesis game release in North America since 1998. Super Fighter Team would later go on to release two more games for the system, Legend of Wukong and Star Odyssey. In December 2010, WaterMelon, an American company, released Pier Solar and the Great Architects, the first commercial role-playing video game specifically developed for the console since 1996, and was the biggest 16-bit game ever produced for the console at the time at 64 Mb (roughly 8 Megabytes). Pier Solar is the only cartridge-based game which can optionally use the Sega CD to play an enhanced soundtrack and sound effects disc. In 2013, independent programmer Future Driver, inspired by the Disney film Wreck-It Ralph, developed Fix-It Felix Jr. for the Genesis. In 2017, American company Mega Cat Games released Coffee Crisis, a Beat 'em up, for the Sega Genesis. On December 5, 2007, Tectoy released a portable version of the Genesis/Mega Drive with twenty built-in games. Another version called "Mega Drive Guitar Idol" comes with two six-button joypads and a guitar controller with five fret buttons. The Guitar Idol game contains a mix of Brazilian and international songs. The console has 87 built-in games, including some from Electronic Arts based on the mobile phone versions. 
It was announced that Tectoy had developed a new Genesis console that not only looks almost identical to the original model of the Genesis, but also has a traditional cartridge slot and SD card reader, which was released in June 2017. In 2009, Chinese company AtGames produced a Genesis/Mega Drive-compatible console, the Firecore. It features a top-loading cartridge slot and includes two controllers similar to the six-button controller for the original Genesis. The console has 15 games built-in and is region-free, allowing cartridge games to run regardless of their region. AtGames produced a handheld version of the console. Both machines have been released in Europe by distributing company Blaze Europe. In 2018, Sega announced a microconsole, the Genesis/Mega Drive Mini. The console includes 40 games, including Gunstar Heroes and Castlevania: Bloodlines, with different games for different regions and a save-anywhere function. Streets of Rage composer Yuzo Koshiro provided the menu music. The console was released on September 19, 2019. Crowdfunded Sega Mega Drive games have been released in recent years, with Tanglewood, a puzzle platformer being released on August 14, 2018, and Xeno Crisis released on October 28, 2019. Both games were created by indie-game developers using actual Sega development hardware to ensure compatibility with the Mega Drive. On December 16, 2020, Paprium, WaterMelon's follow up game to Pier Solar, was released after nearly a decade in development. Reception Reviewing the Genesis in 1995, Game Players noted that its rivalry with the Super NES was skewed by genre, with the Genesis having superior sports games and the Super NES superior RPGs. Commenting that the Genesis hardware was aging and the new software drying up, they recommended consumers buy a next-generation system or a Genesis Nomad instead, but also advised those who already owned a Genesis to not sell it. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the Genesis scores of 4.5, 5.0, 4.0, 4.5, and 7.5 - for all five editors, the lowest score they gave to any of the five consoles reviewed in the issue. While their chief criticisms were the lack of upcoming game releases and dated hardware, they also concurred that the Genesis was clearly inferior to the Super NES in terms of graphics capabilities, sound chip, and games library. John Ricciardi, in particular, considered the Genesis overrated, saying he had consistently found more enjoyment in both the Super NES and TurboGrafx-16, while Dan Hsu and Crispin Boyer recommended it based on its selection of classic titles and the high value-for-money of the six pack-in games Sega was offering at the time. Legacy The Genesis has often ranked among the best video game consoles. In 2009, IGN named it the fifth best video game console, citing its edge in sports games and better home version of Mortal Kombat, and lauding "what some consider to be the greatest controller ever created: the six button". In 2007, GameTrailers named the Genesis as the sixth best console of all time in their list of top ten consoles that "left their mark on the history of gaming", noting its great games and solid controller, and writing of the "glory days" of Sonic the Hedgehog. In January 2008, technology columnist Don Reisinger proclaimed that the Genesis "created the industry's best console war to date", citing Sonic the Hedgehog, superior sports games, and backward compatibility with the Sega Master System. 
In 2008, GamingExcellence ranked it sixth of the 10 best consoles, declaring, "one can truly see the Genesis for the gaming milestone it was." At the same time, GameDaily rated it ninth of ten for its memorable games. In 2014, USgamer's Jeremy Parish wrote, "If the Atari generation introduced video games as a short-lived '70s fad ... and the NES generation established it into an enduring obsession for the young, Sega's Genesis began pushing the medium toward something resembling its contemporary form", expounding that the system served as "the key incubator for modern sports franchises", made "consoles truly international" by providing Western third parties previously put at a disadvantage by Nintendo's restrictive licensing policies with a more profitable alternative, created "an online subscription service" that foreshadowed "PlayStation Plus more than 15 years early" with the Sega Channel, and "played a key role in ensuring the vitality and future of the games industry by breaking Nintendo's near-monopolistic hold on the U.S. and awakening the U.K. to the merits of television gaming". For his part, Kalinske highlighted Sega's role in developing games for an older demographic and pioneering the concept of the "street date" with the simultaneous North American and European release of Sonic the Hedgehog 2. John Sczepaniak of Retro Gamer noted, "It was a system where the allure was born not only of the hardware and games, but the magazines, playground arguments, climate, and politics of the time." Sega of America's marketing campaign for the Genesis was widely emulated, influencing marketing in the subsequent generation of consoles.
See also
List of best-selling Sega Genesis games
Neo Geo
Philips CD-i
182010
https://en.wikipedia.org/wiki/Public%20library
Public library
A public library is a library that is accessible by the general public and is usually funded from public sources, such as taxes. It is operated by librarians and library paraprofessionals, who are also civil servants. There are five fundamental characteristics shared by public libraries: they are generally supported by taxes (usually local, though any level of government can and may contribute); they are governed by a board to serve the public interest; they are open to all, and every community member can access the collection; they are entirely voluntary, in that no one is ever forced to use the services provided; and they provide basic services without charge. Public libraries exist in many countries across the world and are often considered an essential part of having an educated and literate population. Public libraries are distinct from research libraries, school libraries, and other special libraries in that their mandate is to serve the general public's information needs rather than the needs of a particular school, institution, or research population. Public libraries also provide free services such as preschool story times to encourage early literacy, quiet study and work areas for students and professionals, and book clubs to encourage appreciation of literature in adults. Public libraries typically allow users to borrow books and other materials, i.e., take them off the premises temporarily; they also have non-circulating reference collections and provide computer and Internet access to patrons.
Overview
The culmination of centuries of advances in the printing press, moveable type, paper, ink, publishing, and distribution, combined with an ever-growing information-oriented middle class, increased commercial activity and consumption, new radical ideas, massive population growth and higher literacy rates, forged the public library into the form that it is today. Public access to books is not new. Romans made scrolls in dry rooms available to patrons of the baths, and tried with some success to establish libraries within the empire. In the middle of the 19th century, the push for truly public libraries, paid for by taxes and run by the state, gained force. Matthew Battles states that:
It was in these years of class conflict and economic terror that the public library movement swept through Britain, as the nation's progressive elite recognized that the light of cultural and intellectual energy was lacking in the lives of commoners.
Public libraries were often started with a donation, or were bequeathed to parishes, churches, schools or towns. These social and institutional libraries formed the base of many academic and public library collections of today. The establishment of circulating libraries in the 18th century by booksellers and publishers provided a means of gaining profit and creating social centers within the community. The circulating libraries not only provided a place to sell books, but also a place to lend books for a price. These circulating libraries provided a variety of materials, including the increasingly popular novels. Although the circulating libraries filled an important role in society, members of the middle and upper classes often looked down upon these libraries, which regularly sold material from their collections and provided materials that were less sophisticated. Circulating libraries also charged a subscription fee.
However, these fees were set to entice their patrons, providing subscriptions on a yearly, quarterly or monthly basis, without expecting the subscribers to purchase a share in the circulating library. This helped patrons who could not afford to buy books to borrow books to read and then return them. This also created a more popular demand, as book prices were rising and more books were being copied. Circulating libraries were very popular; the first one was established in 1725 in Edinburgh, Scotland, by Allan Ramsay. Circulating libraries were not exclusively lending institutions and often provided a place for other forms of commercial activity, which may or may not have been related to print. This was necessary because the circulating libraries did not generate enough funds through the subscription fees collected from their borrowers. As a commercial venture, it was important to consider contributing factors such as the other goods or services available to the subscribers.
The Malatestiana Library, also known as the Malatesta Novello Library, is a public library dating from 1452 in Cesena, Emilia-Romagna (Italy). It was the first European civic library, i.e. belonging to the Commune and open to everybody. It was commissioned by the Lord of Cesena, Malatesta Novello. The works were directed by Matteo Nuti of Fano (a scholar of Leon Battista Alberti) and lasted from 1447 to 1452.
History
Historian Yahya ibn Said al-Antaki (d. 1066) reported that the Fatimid Caliph Al-Hakim bi-Amr Allah financed and established libraries open to the public, where anyone, even the simple laymen, could choose whatever books they wanted and have them copied for them by public scribes, free of charge. However, as with many of his other decisions, Al-Hakim later ordered this policy to be reversed. In Cesena, Italy, the first community-run public library, the Malatestiana Library, was established in 1447; it provided both secular and religious texts in Latin, Greek, and Hebrew, and was fully open to all members of the public. Another early library that allowed access to the public was that of the Kalendars or Kalendaries, a brotherhood of clergy and laity who were attached to the Church of All-Hallowen or All Saints in Bristol, England. Records show that in 1464, provision was made for a library to be erected in the house of the Kalendars, and reference is made to a deed of that date by which it was "appointed that all who wish to enter for the sake of instruction shall have 'free access and recess' at certain times". In 1598, Francis Trigge established a library in a room above St. Wulfram's Church in Grantham, Lincolnshire, and decreed that it should be open to the clergy and residents of the surrounding neighborhood. Some scholars consider this library an "ancestor" to public libraries, since its patrons did not need to belong to an existing organization like a church or college to use it. However, all the books in the library were chained to stalls and unavailable to borrow, hence its name: the Francis Trigge Chained Library. In the early years of the 17th century, many famous collegiate and town libraries were founded in England. Norwich City library was established in 1608 (six years after Thomas Bodley founded the Bodleian Library, which was open to the "whole republic of the learned") and Chetham's Library in Manchester, which claims to be the oldest public library in the English-speaking world, opened in 1653.
In his seminal work Advis pour dresser une bibliothèque (1644), the French scholar and librarian Gabriel Naudé asserted that only three libraries in all Europe granted in his times regular access to every scholar, namely the Biblioteca Ambrosiana in Milan, the Biblioteca Angelica in Rome, and the Bodleian Library in Oxford.
Enlightenment-era libraries
Claude Sallier, the French philologist and churchman, operated an early form of public library in the town of Saulieu from 1737 to 1750. He wished to make culture and learning accessible to all people. The Załuski Library was built in Warsaw between 1747 and 1795 by Józef Andrzej Załuski and his brother, Andrzej Stanisław Załuski, both Roman Catholic bishops. The library was open to the public and indeed was the first Polish public library, the biggest in Poland and one of the earliest public libraries in Europe. At the start of the 18th century, libraries were becoming increasingly public and were more frequently lending libraries. The 18th century saw the switch from closed parochial libraries to lending libraries. Before this time, public libraries were parochial in nature and libraries frequently chained their books to desks. Libraries also were not uniformly open to the public. In 1790, the Public Libraries Act was still sixty years away; it would not be passed until 1850. Even though the British Museum existed at this time and contained over 50,000 books, the national library was not open to the public, or even to a majority of the population. Access to the Museum depended on passes, for which there was sometimes a waiting period of three to four weeks. Moreover, the library was not open to browsing. Once a pass to the library had been issued, the reader was taken on a tour of the library. Many readers complained that the tour was much too short. Similarly, the Bibliothèque du Roi in Paris required a potential visitor to be "carefully screened" and, even after this stipulation was met, the library was open only two days per week and only to view medallions and engravings, not books. However, up until the mid-19th century, there were virtually no public libraries in the sense in which we now understand the term, i.e. libraries provided from public funds and freely accessible to all. Only one important library in Britain, namely Chetham's Library in Manchester, was fully and freely accessible to the public. In Germany, there was another occurrence of an accessible public library. The Ducal Library at Wolfenbüttel was open "every weekday morning and afternoon" and loaned its books to the public. Between 1714 and 1799, the library loaned 31,485 books to 1,648 different users. These types of public libraries, much closer to the present-day concept of the public library, were extremely rare, as most libraries remained difficult to access. In 1820, the State Central Library, Kerala, began functioning in Trivandrum, India; it is not only India's first public library but also the first such institution outside of Europe. However, there had come into being a whole network of library provision on a private or institutional basis. Subscription libraries, both private and commercial, provided the middle to upper classes with a variety of books for moderate fees. The increase in secular literature at this time encouraged the spread of lending libraries, especially the commercial subscription libraries. Commercial subscription libraries began when booksellers began renting out extra copies of books in the mid-18th century.
Steven Fischer estimates that in 1790, there were "about six hundred rental and lending libraries, with a clientele of some fifty thousand". The mid- to late 18th century saw a virtual epidemic of feminine reading as novels became more and more popular. Novels, while frowned upon in society, were extremely popular. In England, there were many who lamented the "villanous profane and obscene books", and the opposition to the circulating library, on moral grounds, persisted well into the 19th century. Still, many establishments must have circulated many times the number of novels as of any other genre. In 1797, Thomas Wilson wrote in The Use of Circulating Libraries: "Consider, that for a successful circulating library, the collection must contain 70% fiction". However, the overall percentage of novels mainly depended on the proprietor of the circulating library. While some circulating libraries' collections consisted almost completely of novels, others held less than 10% of their overall collection in the form of novels. The national average at the start of the 20th century hovered around 20% of the total collection being novels. Novels varied from other types of books in many ways. They were read primarily for enjoyment instead of for study. They did not provide academic knowledge or spiritual guidance; thus they were read quickly and far fewer times than other books. These were the perfect books for commercial subscription libraries to lend. Since books were read for pure enjoyment rather than for scholarly work, books needed to become both cheaper and smaller. Small duodecimo editions of books were preferred to the large folio editions. Folio editions were read at a desk, while the small duodecimo editions could be easily read like the paperbacks of today. The French journalist Louis-Sébastien Mercier wrote that the books were also separated into parts so that readers could rent a section of the book for some hours instead of a full day. This allowed more readers to have access to the same work at the same time, making it more profitable for the circulating libraries.
Much like paperbacks of today, many of the novels in circulating libraries were unbound. At this period of time, many people chose to bind their books in leather. Many circulating libraries skipped this process. Circulating libraries were not in the business of preserving books; their owners wanted to lend books as many times as they possibly could. Circulating libraries had ushered in a completely new way of reading. Reading was no longer simply an academic pursuit or an attempt to gain spiritual guidance. Reading became a social activity. Many circulating libraries were attached to the shops of milliners or drapers. They served as much for social gossip and the meeting of friends as coffee shops do today.
Another factor in the growth of subscription libraries was the increasing cost of books. In the last two decades of the century, especially, prices were practically doubled, so that a quarto work cost a guinea, an octavo 10 shillings or 12 shillings, and a duodecimo cost 4 shillings per volume. Price apart, moreover, books were difficult to procure outside London, since local booksellers could not afford to carry large stocks. Commercial libraries, since they were usually associated with booksellers, and also since they had a greater number of patrons, were able to accumulate greater numbers of books.
The United Public Library was said to have a collection of some 52,000 volumes – twice as many as any private-subscription library in the country at that period. These libraries, since they functioned as a business, also lent books to non-subscribers on a per-book system. Despite the existence of these subscription libraries, they were only accessible to those who could afford the fees and to those with time to read during the daylight. As stated by James Van Horn Melton, "one should not overstate the extent to which lending libraries 'democratized' reading" since "they were probably less important for creating new readers than for enabling those who already read to read more." For many people, these libraries, though more accessible than libraries such as the British Library, were still largely an institution for the middle and upper classes.
Private-subscription libraries
Private-subscription libraries functioned in much the same manner as commercial subscription libraries, though they varied in many important ways. One of the most popular versions of the private-subscription library was the "gentlemen only" library. The gentlemen's subscription libraries, sometimes known as proprietary libraries, were nearly all organized on a common pattern. Membership was restricted to the proprietors or shareholders, and ranged from a dozen or two to between four and five hundred. The entrance fee, i.e. the purchase price of a share, was in early days usually a guinea, but rose sharply as the century advanced, often reaching four or five guineas during the French wars; the annual subscription, during the same period, rose from about six shillings to ten shillings or more. The book-stock was, by modern standards, small (Liverpool, with over 8,000 volumes in 1801, seems to have been the largest), and was accommodated, at the outset, in makeshift premises, very often over a bookshop, with the bookseller acting as librarian and receiving an honorarium for his pains.
The Liverpool Subscription Library was a gentlemen-only library. In 1798, it was renamed the Athenaeum when it was rebuilt with a newsroom and coffeehouse. It had an entrance fee of one guinea and an annual subscription of five shillings. An analysis of the registers for the first twelve years provides glimpses of middle-class reading habits in a mercantile community at this period. The largest and most popular sections of the library were History, Antiquities, and Geography, with 283 titles and 6,121 borrowings, and Belles Lettres, with 238 titles and 3,313 borrowings. The most popular single work was John Hawkesworth's Account of Voyages ... in the Southern Hemisphere (3 vols), which was borrowed on 201 occasions. The records also show that in 1796, membership had risen by a third to 198 subscribers (of whom 5 were women) and the titles had increased five-fold to 4,987. This mirrors the increase in reading interests. A patron list from the Bath Municipal Library shows that from 1793 to 1799, a stable 30% of the library's patrons were female. It was also uncommon for these libraries to have buildings designated solely as library buildings during the 1790s, though in the 19th century, many libraries would begin building elaborate permanent residences. Bristol, Birmingham, and Liverpool were among the few libraries with their own buildings.
The accommodations varied from a shelf for a few dozen volumes in the country stationer's or draper's shop, to the expansion into a back room, to the spacious, elegant areas of Hookham's or those at resorts like Scarborough, and four in a row at Margate. Private-subscription libraries held a greater amount of control over both membership and the types of books in the library. There was almost a complete elimination of cheap fiction in the private societies. Subscription libraries prided themselves on respectability. The highest percentage of subscribers were often landed proprietors, gentry, and members of the old professions.
Towards the end of the 18th century and in the first decades of the 19th century, the demand for books and general education made itself felt among the social classes generated by the beginnings of the Industrial Revolution. The late 18th century saw a rise in subscription libraries intended for the use of tradesmen. In 1797, there was established at Kendal what was known as the Economical Library, "designed principally for the use and instruction of the working classes." There was also the Artizans' library established at Birmingham in 1799. The entrance fee was 3 shillings, and the subscription was 1 shilling 6 pence per quarter. This was a library of general literature. Novels, at first excluded, were afterwards admitted on condition that they did not account for more than one-tenth of the annual income.
Modern public libraries
Under the terms of the Museums Act of 1845, the municipalities of Warrington and Salford established libraries in their museums. Warrington Municipal Library opened in 1848. Although by the mid-19th century England could claim 274 subscription libraries and Scotland 266, the foundation of the modern public library system in Britain is the Public Libraries Act 1850. The Act first gave local boroughs the power to establish free public libraries and was the first legislative step toward the creation of an enduring national institution that provides universal free access to information and literature. In the 1830s, at the height of the Chartist movement, there was a general tendency towards reformism in the United Kingdom. The middle classes were concerned that the workers' free time was not being well spent. This was prompted more by Victorian middle-class paternalism than by demand from the lower social orders. Campaigners felt that encouraging the lower classes to spend their free time on morally uplifting activities, such as reading, would promote greater social good. In 1835, and against government opposition, James Silk Buckingham, MP for Sheffield and a supporter of the temperance movement, was able to secure the chair of the select committee which would examine "the extent, causes, and consequences of the prevailing vice of intoxication among the labouring classes of the United Kingdom" and propose solutions. Francis Place, a campaigner for the working class, agreed that "the establishment of parish libraries and district reading rooms, and popular lectures on subjects both entertaining and instructive to the community might draw off a number of those who now frequent public houses for the sole enjoyment they afford". Buckingham introduced to Parliament a Public Institution Bill allowing boroughs to charge a tax to set up libraries and museums, the first of its kind.
Although this did not become law, it had a major influence on William Ewart MP and Joseph Brotherton MP, who introduced a bill which would "[empower] boroughs with a population of 10,000 or more to raise a ½d for the establishment of museums". This became the Museums Act 1845. The advocacy of Ewart and Brotherton then succeeded in having a select committee set up to consider public library provision. The Report argued that the provision of public libraries would steer people towards temperate and moderate habits. With a view to maximising the potential of current facilities, the committee made two significant recommendations. They suggested that the government should issue grants to aid the foundation of libraries and that the Museums Act 1845 should be amended and extended to allow for a tax to be levied for the establishment of public libraries. Objections were raised about the increase in taxation, the potential infringement on private enterprise and the existing library provision such as mechanics' institutes, and the fear that it would give rise to "unhealthy social agitation". The Bill passed through Parliament as most MPs felt that public libraries would provide facilities for self-improvement through books and reading for all classes, and that the greater levels of education attained by providing public libraries would result in lower crime rates.
Salford Museum and Art Gallery first opened in November 1850 as "The Royal Museum & Public Library", the first unconditionally free public library in England. The library in Campfield, Manchester, was the first to operate as a free lending library without subscription, in 1852. Norwich lays claim to being the first municipality to adopt the Public Libraries Act 1850 (which allowed any municipal borough with a population of 100,000 or more to introduce a halfpenny rate to establish public libraries, although not to buy books). Norwich was the eleventh library to open, in 1857, after Winchester, Manchester, Liverpool, Bolton, Kidderminster, Cambridge, Birkenhead and Sheffield. The 1850 Act was noteworthy because it established the principle of free public libraries. In 1866, an amending Act was passed which eliminated entirely the population limit for the establishment of a library and replaced the two-thirds majority previously required for adoption with a simple majority. It also allowed neighbouring parishes to combine with an existing or potential library authority. Despite the rise in the level of tax public libraries could levy, it was still very difficult for boroughs to raise enough capital to fund new libraries. The growth of the public library movement in the wake of the 1850 Act relied heavily on the donations of philanthropists. County libraries were a later development, made possible by the establishment of county councils in 1888. They normally have a large central library in a major town, with smaller branch libraries in other towns and a mobile library service covering rural areas.
Expansion
The modern public library grew at a great pace at the end of the 19th century, especially in the English-speaking world. Philanthropists and businessmen, including John Passmore Edwards, Henry Tate and Andrew Carnegie, helped to fund the establishment of large numbers of public libraries for the edification of the masses.
Public libraries in North America developed from the 18th century to today; as the country grew more populous and wealthier, factors such as a push for education and a desire to share knowledge led to broad public support for free libraries. In addition, money donations by private philanthropists provided the seed capital to get many libraries started. In some instances, collectors donated large book collections. The first modern public library in the world supported by taxes was the Peterborough Town Library in Peterborough, New Hampshire, established in 1833. This was a small public library. The first large public library supported by taxes in the United States was the Boston Public Library, which was established in 1848 but did not open its doors to the public until 1854. The Redwood Library and Athenaeum was founded in 1747 by a group led by Abraham Redwood. It was the first library in Rhode Island and the oldest lending library in America. Over half of its volumes were lost when it was used as the British Officers Club during the Revolutionary War. An effort was made to replace the original collection, and over 90% of the volumes lost were returned. The library is still in use. A total of 1,689 Carnegie libraries were built in the United States between 1883 and 1929, including some belonging to universities. By 1930, half the American public libraries had been built by Carnegie.
The first public library in Australia was the Melbourne Public Library (now the State Library of Victoria), which opened in 1856, just a few years after the introduction of public libraries into Britain. It was, however, purely a reference library. In September 1869, the New South Wales (NSW) government opened the Free Public Library, Sydney (now the State Library of New South Wales), having purchased a bankrupt subscription library. In 1896, the Brisbane Public Library was established. The Library's collection was purchased by the Queensland Government from the private collection of Justice Harding. In 1935, the Free Library Movement was established in New South Wales, advocating for free public libraries supported by municipal authorities. A similar movement was established in Victoria within a couple of years.
Eugène Morel, a writer and one of the librarians at the Bibliothèque nationale, pioneered modern public libraries in France. He put forward his ideas in the 1910 book La Librairie publique.
Services
Book borrowing and lending
The main task of public libraries is to provide the public with access to books and periodicals. The American Library Association (ALA) addresses this role of libraries as part of "access to information" and "equity of access", part of the profession's ethical commitment that "no one should be denied information because he or she cannot afford the cost of a book or periodical, have access to the internet or information in any of its various formats." Libraries typically offer access to thousands, tens of thousands, or even millions of books, the majority of which are available for borrowing by anyone with the appropriate library card. A library's selection of books is called its collection, and usually includes a range of popular fiction, classics, nonfiction and reference works, books of public interest or under public discussion, and subscriptions to popular newspapers and magazines. Most libraries offer quiet space for reading, known as reading rooms. Borrowers may also take books home, as long as they return them at a certain time and in good condition.
If a borrowed book is returned late, the library may charge a small library fine, though some libraries have eliminated fines in recent years. About two-thirds of libraries now provide access to e-books and digital or digitized periodicals as well as printed books. Many libraries offer assistance to borrowers in selecting books through specialist readers' advisory librarians. Public libraries also provide books and other materials for children. These items are often housed in a special section known as a children's library and attended to by a specialized children's librarian. Child-oriented websites with online educational games and programs specifically designed for younger library users are becoming increasingly popular. Other services may include large-print or Braille materials, books on tape, young adult literature and other materials for teenagers, and materials in languages other than the national language. Libraries also lend books to each other, a practice known as interlibrary loan. Interlibrary loan allows libraries to provide patrons access to the collections of other libraries, especially rare, infrequently used, specialized and/or out-of-print books. Libraries within the same system, such as a county system, may lend their books to each other, and libraries in different states may even use an interlibrary loan system. The selection, purchase and cataloging of books for a collection; the care, repair, and weeding of books; the organization of books in the library; readers' advisory; and the management of membership, borrowing and lending are typical tasks for a public librarian, an information professional with graduate-level education or experience in library and information science.
Privacy
In the United States, libraries are responsible for supporting the First Amendment as it relates to their facilities, through policies such as the American Library Association's Library Bill of Rights. The right to freedom of speech and information is significant to public libraries; one way of upholding this doctrine is to protect the privacy of all patrons that belong to a library. The concept of confidentiality is important because the First Amendment may be violated if a patron's information could possibly be shared. Patrons may not feel free to check out certain materials for fear it would later be revealed. Members of society need to be reassured that even if they borrow controversial or embarrassing materials, their privacy will be upheld. Some libraries require staff to talk about confidentiality or direct the patron to literature on the subject when creating a new library card for patrons.
Digital engagement
Part of the public library mission has become attempting to help bridge the digital divide. As more books, information resources, and government services are being provided online (see e-commerce and e-government), public libraries increasingly provide access to the Internet and public computers for users who otherwise would not be able to connect to these services. They can also provide community spaces to encourage the general population to improve their digital skills through library coding clubs and library makerspaces. Almost all public libraries now house a computer lab. Internationally, public libraries offer information and communication technology (ICT) services, giving "access to information and knowledge" the "highest priority".
While different countries and areas of the world have their own requirements, general services offered include free connection to the Internet, training in using the Internet, and relevant content in appropriate languages. In addition to typical public library financing, non-governmental organizations (NGOs) and businesses fund services that assist public libraries in combating the digital divide. In addition to access, many public libraries offer training and support to computer users. Once access has been achieved, there remains a large gap in people's online abilities and skills. For many communities, the public library is the only agency offering free computer classes, information technology learning and an affordable, interactive way to build digital skills. Most libraries now offer free wireless Internet to their patrons (91%), e-books for borrowing (76%), and formal or informal technology training (90%). A significant service provided by public libraries is assisting people with e-government access and the use of federal, state and local government information, forms and services. In 2006, 73 percent of library branches reported that they were the only local provider of free public computer and Internet access. A 2008 study found that "100 percent of rural, high poverty outlets provide public Internet access." Access to computers and the Internet is now nearly as important to library patrons as access to books.
Classroom and meeting space
Public libraries have a long history of functioning as community centers or public spaces for reading, study and formal and informal public meetings. In 1898, Andrew Carnegie, a prominent library philanthropist, built a library in Homestead, Pennsylvania, where his main steel mills were located. Besides a book collection, it included a bowling alley, an indoor swimming pool, basketball courts and other athletic facilities, a music hall, and numerous meeting rooms for local organizations. It sponsored highly successful semi-pro football and baseball teams. Even before the development of the modern public library, subscription libraries were often used as clubs or gathering places. They served as much for social gossip and the meeting of friends as coffee shops do today. Throughout history, public libraries were touted as alternatives to dance halls or gentlemen's clubs, and were frequently built, organized and supported because of their equalizing and civilizing influence. Today, in-person and online programs for reader development, language learning, homework help, free lectures and cultural performances, and other community service programs are common offerings. The library storytime, in which books are read aloud to children and infants, is a cultural touchstone. Most public libraries offer frequent storytimes, often daily or even several times a day for different age groups. Some libraries have begun offering sensory storytimes for children and adults on the autism spectrum. Sensory storytimes give patrons "more ways to process information", especially considering that people on the autism spectrum may be concrete thinkers or have sensory sensitivities to fluorescent lighting or ambient noise that other patrons might not notice. One of the most popular programs offered in public libraries is "summer reading" for children, families, and adults. Summer reading usually includes a list of books to read during summer holidays, as well as performances, book discussions or other celebrations of reading, culture and the humanities.
Many libraries offer classes to the community, such as tech clinics where patrons can bring in laptops and electronic devices and receive one-on-one attention in solving their problems and learning how to use them. Libraries may also offer free or inexpensive meeting space for community organizations and educational and entrepreneurial activity. The addition of makerspaces in libraries, beginning with the Fayetteville Free Library in 2011, offers the potential for new roles for public spaces and public libraries. Attendance at library programs increased by 22% between 2004 and 2008.
Programming
While in the past libraries were merely buildings to house their collections, most now utilize their space to offer programs or clubs regularly. Although some libraries will have similar programs with different names, such as book club, writing club or computer programs, most programs will differ based on the specific library and the community they serve. New studies have shown that librarians must research what their specific community needs, "because communities differ, however, the ways libraries implement these services differ as well. The [example of service response] offered at one library may vary significantly from [the same example] offered by another library. The differences are perfectly appropriate if they result from a tailoring of services to address local needs." Websites like Pinterest have numerous ideas for creating programs for local patrons, while the website Instructables has DIY tutorials, complete with pictures, which is helpful for libraries on a budget. "Programs in the humanities and the arts that encourage people to think and talk about ethics and values, history, art, poetry, and other cultures are integral to the library's mission."
Adult programs
Adult library programming in the United States initially had strong ties to adult education and adult literacy. Margaret E. Monroe traced these connections on the 25th anniversary of the U.S. Adult Education Act, which was part of the Economic Opportunity Act of 1964. The American Library Association supported the "Adult Services in the Eighties" (ASE) project, which replicated an earlier ALA 1952–53 survey, Adult Education Activities in Public Libraries by Helen Lyman Smith. The ASE project was conducted to provide planning for new directions for adult library services. Sources on the scope of adult services include "Where Would We Be without Them? Libraries and Adult Education Activities: 1966–91," "Twenty-First Century Public Library Adult Services," Adult Programs in the Library, and Designing Adult Services: Strategies for Better Serving Your Community. A national study of public library service to older adults was conducted in 2015. The New York Public Library offers over 93,000 programs to its patrons every year at its 87 different branches. Adult programs include Excel classes, a writing club, an adult coloring club, a chess club, a knitting club, and a jewelry-making class. The Albuquerque Bernalillo County Library has an adult coloring club, a crochet/knitting/sewing club, a gardening club, a bead and string class, and a bilingual computer class. The Tampa–Hillsborough County Public Library System has 31 branches that offer the usual book clubs and writing clubs for adults. They also offer an early morning walking club, chair yoga classes, beginning computer classes, genealogy classes, walk-in tech help, and a coffee and French talk class.
Teen programs
The Orange County Library System offers numerous teen activities, such as a maker/DIY program, audio equipment training, sewing classes, knitting classes, ESL classes, and a chess club. The Springfield-Greene County Library has writing and book clubs as well as a tech training class, board game nights, movie nights, craft classes, and a My Little Pony club. The Pikes Peak Library District has math tutors for their teen patrons. They also offer writing and book clubs, a Dungeons and Dragons club, a coding lab, an anime club, guided meditation, and an occasional Super Smash Bros. tournament.
Children's programs
The Belmont Public Library offers an array of children's programs including story times for various age groups, concerts, music classes, puppet shows, a maker club, and sing-along Saturdays. The Saratoga Springs Public Library also has numerous story times as well as yoga for children, parent/child workshops, Spanish workshops, a read-to-a-dog program, and a Kindness club. The Chelmsford Public Library has a plethora of story times for ages birth to preschool. They also offer baby yoga, stay and play time, toddler rhyme time, a dads and donuts day, and an annual Gingerbread Festival.
Diversity
A significant goal of American libraries is to become more culturally diverse throughout the country. Public libraries are equal-access facilities and aim to make everyone feel welcome, no matter their religion, race, ethnicity, sex, or financial status. To accomplish this goal, libraries are striving to find ways to make both their staff and the programs they provide more culturally sensitive. A starting point for most libraries is to learn about the demographics of the area in which they are located. Once the library system learns more about the community it serves, it can start building a collection and programs around it. Another suggestion from multiple experts is to hire staff who represent the community in which the library is located, in order to better relate to and serve members of that community. By offering culturally diverse programs, a library can be inviting to many members of the community. A few ways libraries accomplish this goal are by providing programs which are inclusive of many different cultures, such as holding lectures or events in different languages, celebrating diverse holidays, and inviting speakers and authors from different cultures to come and talk.
Research assistance
Librarians at most public libraries provide reference and research help to the general public. This can include assisting students in finding reliable sources for papers and presentations; helping the public find answers to questions or evidence in a debate; or providing resources related to a specific event or topic. Reference assistance is usually provided through a reference interview, which is usually conducted at a public reference desk but may also be conducted by telephone or online. Reference librarians may also help patrons develop an appropriate bibliography or works cited page for an academic paper. Depending on the size of the library, there may be multiple reference desks that deal with different topics. Large public, academic or research libraries may employ librarians that are experts in specific topics or subjects. Often the children's section in a public library has its own reference desk. At a smaller library, circulation and reference may occur at the same desk.
The Internet has had a significant effect on the availability and delivery of reference services. Many reference works, such as the Encyclopædia Britannica, have moved entirely online, and the way people access and use these works has changed dramatically in recent decades. The rise of search engines and crowd-sourced resources such as Wikipedia has transformed the reference environment. In addition to the traditional reference interview, reference librarians have an increasing role in providing access to digitized reference works (including the selection and purchase of databases not available to the general public) and ensuring that references are reliable and presented in an academically acceptable manner. Librarians also have a role in teaching information literacy, so that patrons can find, understand and use information and finding aids like search engines, databases and library catalogs: for instance, patrons who lack access to expensive academic subscriptions can be taught to use Unpaywall to access open-access literature easily. Public libraries may answer millions of reference questions every year. The Boston Public Library answers more than one million reference questions annually.
Reference collections
In addition to their circulating collections, public libraries usually offer a collection of reference books, such as encyclopedias, dictionaries, phone books and unique or expensive academic works. These books may not be available for borrowing, except under special circumstances. Reference books that are frequently used, such as phone books, may be housed in a special section called "ready reference." Some libraries also keep historical documents relevant to their particular town, and serve as a resource for historians in some instances. The Queens Public Library kept letters written by unrecognized Tiffany lamp designer Clara Driscoll, and the letters remained in the library until a curator discovered them. Some libraries may also serve as archives or government depositories, preserving historic newspapers, property records or government documents. Collections of unique or historical works are sometimes referred to as special collections; except in rare cases, these items are reference items, and patrons must use them inside the library under the supervision or guidance of a librarian. Local libraries' special collections may be of particular interest to people researching their family history. Libraries that focus on collecting works related to particular families are genealogical libraries and may be housed in the same building as a public library. Many libraries, especially large, urban libraries, have large collections of photographs, digital images, rare and fragile books, artifacts and manuscripts available for public viewing and use. Digitization and digital preservation of these works is an ongoing effort, usually funded by grants or philanthropy. In 2005, the New York Public Library offered the "NYPL Digital Gallery", which made a collection of 275,000 images viewable over the web; while most of the contents are in the public domain, some images are still subject to copyright rules. Limited funding, copyright restrictions, a lack of expertise and poor provenance are barriers to the large-scale digitization of libraries' special collections.
Other services
Depending on a community's desires and needs, public libraries may offer many other resources and services to the public.
In addition to print books and periodicals, most public libraries today have a wide array of other media, including audiobooks, e-books, CDs, cassettes, videotapes, and DVDs. Certain libraries stock general materials for borrowing, such as pots, pans, sewing machines, and similar household items, in order to appeal to a larger population. Collections of books and academic research related to the local town or region are common, along with collections of works by local authors. Libraries' storage space and lending systems may be used to lend a wide range of materials, including works of art, cake pans, seeds, tools and musical instruments. Similar to museums and other cultural institutions, libraries may also host exhibits or exhibitions. As more government services move online, libraries and librarians have a role in providing access to online forms and assistance with filling them out. For example, in 2013, American public libraries were promoted as a way for people to access online health insurance marketplaces created by the Affordable Care Act. In rural areas, the local public library may have a bookmobile service, consisting of one or more buses or pack animals (such as burros, camels, donkeys, or elephants) furnished as small public libraries, some equipped with Internet access points or computer labs, and serving the countryside according to a regular schedule. In communities that are extremely isolated or that have poor digital infrastructure, libraries may provide the only access to online education, telemedicine, or telework. Libraries also partner with schools and community organizations to promote literacy and learning. 24-hour library access has been piloted in certain public libraries in North America, such as the Pioneer Library System's Norman Public Library in Oklahoma and the Ottawa Public Library in Ontario. Such access may involve anything from a "library vending machine", in which print books are mechanically vended to (and dispensed from) patrons, to reduced staffing during the night and early morning hours. Libraries promote cultural awareness; in Newark, New Jersey, the public library celebrated black history with exhibits and programs. One account suggested libraries were essential to "economic competitiveness" as well as "neighborhood vitality" and help some people find jobs. Libraries have an important role during emergencies and disasters, where they may be used as shelters, provide space to charge phones and access the Internet, and serve as locations for the distribution of aid, especially financial aid, which requires access to computers and the Internet. The U.S. Federal Emergency Management Agency recognizes libraries as providing essential community service during times of disaster. Libraries have also had an increasingly important economic role during the recession, providing job search assistance, computer skills training and resume help to patrons. In response to the COVID-19 pandemic, many libraries have begun offering remote and distance learning options for patrons.
Organization
The establishment or development of a public library involves creating a legal authorization and governing structure, building a collection of books and media, and securing reliable funding sources, especially government sources. Most public libraries are small, serving a population of under 25,000, and are (or were) established in response to specific local needs.
In A Library Primer, John Cotton Dana's 1899 work on the establishment and management of libraries in the United States, Dana wrote:
Each community has different needs, and begins its library under different conditions. Consider then, whether you need most a library devoted chiefly to the work of helping the schools, or one to be used mainly for reference, or one that shall run largely to periodicals and be not much more than a reading room, or one particularly attractive to girls and women, or one that shall not be much more than a cheerful resting-place, attractive enough to draw man and boy from street corner and saloon. Decide this question early, that all effort may be concentrated to one end, and that your young institution may suit the community in which it is to grow, and from which it is to gain its strength.
After being established and funded through a resolution, public referendum or similar legal process, the library is usually managed by a board of directors, library council or other local authority. A librarian is designated as the library director or library manager. In small municipalities, city or county government may serve as the library board and there may be only one librarian involved in the management and direction of the library. Library staff who are not involved in management are known in the United States and some other English-speaking countries as "library paraprofessionals" or "library support staff." They may or may not have formal education in library and information science. Support staff have important roles in library collection development, cataloging, technical support, and the process of preparing books for borrowing. All of these tasks may be referred to as technical services, whether or not they involve information technology. While the library's governing board has ultimate authority to establish policy, many other organizations may participate in library management or library fundraising, including civic and voluntary associations, women's clubs, Friends of the Library groups, and groups established to advise the library on the purchase and retention of books. State and national governments may also have a role in the establishment and organization of public libraries. Many governments operate their own large libraries for public and legislative use (e.g., state libraries, the Library of Congress, the Bibliothèque nationale de France). These governments can also influence local libraries by reserving formal recognition or funding for libraries that meet specific requirements. Finally, associations of library and information professionals, such as the American Library Association (ALA) and the Chartered Institute of Library and Information Professionals (CILIP), help establish norms and standard procedures, secure funding, advocate at the state or national level and certify library schools or information schools.
Funding
Public libraries are funded through a wide combination of sources, the most significant of which is usually local or municipal funding. The citizens who use a local library support it via the city or county government, or through a special-purpose district, which is a local government body that has independent leadership and may levy its own taxes. Local funding may be supplemented by other government funding. For example, in the United States, the state and federal governments provide supplementary funding for public libraries through state aid programs, the Library Services and Technology Act (LSTA) and E-Rate.
In England, local authorities have a statutory duty to provide residents with a library service, as set out in the Local Government Act 1974. State and local governments may also offer cities and counties large grants for library construction or renovation. Private philanthropy has also had a significant role in the expansion and transformation of library services, and, like other educational institutions, some libraries may be partially funded by an endowment. Some proactive librarians have devised alliances with patron and civic groups to supplement their financial situations. Library "friends" groups, activist boards, and well-organized book sales also supplement government funding. Public funding has always been an important part of the definition of a public library. However, with local governments facing financial pressures due to the Great Recession, some libraries have explored ways to supplement public funding. Cafes, bakeries, bookstores, gift shops and similar commercial endeavors are common features of new and urban libraries. The Boston Public Library has two restaurants and an online store which features reproductions of photographs and artwork. Pressure on funding has also led to closer partnerships between libraries, and between libraries and for-profit ventures, in order to sustain the library as a public space while providing business opportunities to the community. While still fairly uncommon, public-private partnerships and "mixed-use" or "dual-use" libraries, which provide services to the public and one or more student populations, are occasionally explored as alternatives. Jackson County, Oregon (US), closed its entire 15-branch public library system for six months in 2007, reopening under a public-private partnership and with a reduced schedule. Small fees, such as library fines or printing fees, may also offset the cost of providing library services, though fines and fees do not usually have a significant role in library funding. The decline of support from local governments has left libraries compensating at the expense of their patrons. According to the article "Waking Up to Advocacy in a New Political Reality for Libraries," libraries began charging fees and accruing fines for services rendered as early as the 1980s. These services included "printing, notarizing, scanning, photocopying, photo services, library cards for those who live outside of the service area, meeting room usage, document searches, inter-library loan, and e-book checkouts", among many others. Data show disparities among private and public libraries, with libraries in rural areas possessing weaker technological infrastructure and fewer full-time employees holding the title of librarian. Data also show that funding and service levels differ across and within states. Rural libraries tend to have smaller collections, lower bandwidth rates, less staff and fewer hours of operation. Access to high-quality internet may be limited for lower-income individuals, ethnic minorities and rural residents. Where libraries in less-advantaged communities are underused, local governments have permanently closed them, affecting individuals who are less educated. Although usage of public libraries has increased significantly in recent decades, libraries are under intense financial pressure and scrutiny. The American Library Association says media reports it compiled in 2004 showed some $162 million in funding cuts to libraries nationwide. In 2009, 40% of states reported a decline in state aid for libraries.
In 2012, Great Britain lost over 200 libraries to budget cuts, part of a general trend of fiscal austerity in Europe. However, there are signs of stabilization in library funding. Funding for construction and renovation of new libraries remains steady. Cities' plans to close public libraries are frequently cancelled or scaled back. In 2012, voters in 13 U.S. states approved new funding for library construction or operations. In the UK, the Library of Birmingham, which opened in 2013, is the largest cultural space in Europe. Survey data suggests the public values free public libraries. A Public Agenda survey in 2006 reported 84% of the public said maintaining free library services should be a top priority for their local library. Public libraries received higher ratings for effectiveness than other local services such as parks and police. But the survey also found the public was mostly unaware of financial difficulties facing their libraries. In various cost-benefit studies, libraries continue to provide returns on the taxpayer dollar far higher than other municipal spending. A 2008 survey discusses comprehensively the prospects for increased funding in the United States, saying in conclusion "There is sufficient, but latent, support for increased library funding among the voting population." A 2013 Pew Research Center survey reported that 90% of Americans ages 16 and older said that the closing of their local public library would affect their community, with 63% saying it would have a "major" impact. See also Public libraries in North America Public Library Association, a division of the American Library Association References Further reading Barnett, Graham Keith (1973) The History of Public Libraries in France from the Revolution to 1939 Harris, Michael H. History of Libraries of the Western World (4th ed. Scarecrow Press, 1999); earlier editions were by Elmer Johnson Hughes, Kathleen M., and Jamie Wirsbinski Santoro (2021). Pivoting During the Pandemic: Ideas for Serving Your Community Anytime, Anywhere. Chicago: ALA Editions. Kranich, Nancy C. (2021) "Democracy, Community, and Libraries" in Mary Ann Davis Fournier and Sarah Ostman, eds., Ask, Listen, Empower: Grounding Your Library Work in Community Engagement, pp. 1-15. Chicago: ALA Editions. Lee, Robert E. 1966. Continuing Education of Adults through the American Public Library. Chicago: American Library Association. McCook, Kathleen de la Peña, Bossaller, J., & Thomas, F. (2018), Introduction to Public Librarianship, 3rd ed. Chicago: ALA Editions. Worpole, Ken (2013), Contemporary Library Architecture: A Planning and Design Guide, Routledge. Raphael, Molly. 2009. "The Transformational Power of Libraries in Tough Economic Times." Library Leadership & Management 23, no. 3: 106–151. Britain Black, Alistair. "Skeleton in the cupboard: social class and the public library in Britain through 150 years." Library History 16.1 (2000): 3-12. says "they have always been, and continue to be, an expression of liberal middleclass ideals." abstract Charing, S. "Self-Help v State Intervention: the 1850 Public Library Act as a Reflection of Mid-Victorian Doctrine," Australian Library Journal (1995) 44(1), pp. 47–54. Hayes, Emma, and Anne Morris. "Leisure role of public libraries A historical perspective." Journal of librarianship and information science 37.2 (2005): 75–81. abstract Hoare, P. (ed.) 
Cambridge History of Libraries in Britain and Ireland (Cambridge University Press, 2006) Kelly, Thomas, History of Public Libraries in Great Britain 1845–1965 (London: Library Association, 1973) Kelly, T & E. Kelly. Books for the People: an illustrated history of the British Public Library (London: Andre Deutsch, 1977) McMenemy, D. The Public Library (London: FACET, 2009) Minto, J. History of the Public Library Movement in Great Britain and Ireland (London: Library Association, 1932) Munford, William Arthur. Penny rate: aspects of British public library history, 1850–1950 (Library association, 1951) Murison, W. J. The Public Library: its origins, purpose and significance (2nd ed. London: Harrap, 1971) Overington, Michael A. The Subject Departmentalized Public Library. London: The Library Association, 1969. 167 p. Stockham, K. A., ed. British County Libraries: 1919–1969. (London: André Deutsch, 1969) Sturges, P. "Conceptualizing the public library 1850–1919." In Kinnell, M. and Sturges, P. (eds) Continuity and Innovation in the Public Library: the Development of a Social Institution (London: Library Association, 1996) Historiography Harris, Michael H. (1967) "Library history: a critical essay on the in-print literature." Journal of Library History (1967): 117–125. in JSTOR, covers the main books for major countries External links Stimulating Growth and Renewal of Public Libraries: The Natural Life Cycle as Framework "How did public libraries get started?" from The Straight Dope Hoyle, Alan (2020), "The Manchester Free Library Building, Home to the Spanish Instituto Cervantes" Seminar in Public Libraries Book promotion Types of library
664924
https://en.wikipedia.org/wiki/Biba%20Model
Biba Model
The Biba Model or Biba Integrity Model, developed by Kenneth J. Biba in 1975, is a formal state transition system of computer security policy that describes a set of access control rules designed to ensure data integrity. Data and subjects are grouped into ordered levels of integrity. The model is designed so that subjects may not corrupt data in a level ranked higher than the subject, or be corrupted by data from a lower level than the subject. In general, the model was developed to address integrity as the core principle, which is the direct inverse of the Bell–LaPadula model. Features In general, preservation of data integrity has three goals: Prevent data modification by unauthorized parties Prevent unauthorized data modification by authorized parties Maintain internal and external consistency (i.e. data reflects the real world) This security model is directed toward data integrity (rather than confidentiality) and is characterized by the phrase: "read up, write down". This is in contrast to the Bell-LaPadula model which is characterized by the phrase "read down, write up". In the Biba model, users can only create content at or below their own integrity level (a monk may write a prayer book that can be read by commoners, but not one to be read by a high priest). Conversely, users can only view content at or above their own integrity level (a monk may read a book written by the high priest, but may not read a pamphlet written by a lowly commoner). Another analogy to consider is that of the military chain of command. A General may write orders to a Colonel, who can issue these orders to a Major. In this fashion, the General's original orders are kept intact and the mission of the military is protected (thus, "read up" integrity). Conversely, a Private can never issue orders to his Sergeant, who may never issue orders to a Lieutenant, also protecting the integrity of the mission ("write down"). The Biba model defines a set of security rules, the first two of which are similar to the Bell–LaPadula model. These first two rules are the reverse of the Bell–LaPadula rules: The Simple Integrity Property states that a subject at a given level of integrity must not read data at a lower integrity level (no read down). The * (star) Integrity Property states that a subject at a given level of integrity must not write to data at a higher level of integrity (no write up). The Invocation Property states that a subject must not request service from (invoke) a subject at a higher integrity level; it may invoke only subjects at an equal or lower integrity level. Implementations In FreeBSD, the Biba model is implemented by the mac_biba MAC policy. In Linux, the Biba model is implemented in the General Dynamics Mission Systems PitBull product. In XTS-400, the Biba model is implemented in BAE Systems' operating system of the same name. See also Discretionary Access Control – DAC Graham–Denning model Mandatory Access Control – MAC Multilevel security – MLS Security-Enhanced Linux Security Modes of Operation Take-grant protection model Clark–Wilson model References External links "Integrity Policies" PowerPoint presentation from University of Colorado at Colorado Springs Computer security models
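To make the three rules described above concrete, here is a minimal Python sketch of a reference-monitor check; the numeric integrity scale and the example subjects and objects are invented for illustration and are not taken from Biba's paper.

# Minimal sketch of the three Biba rules; the level scale is invented.
LEVELS = {"low": 0, "medium": 1, "high": 2}   # higher number = higher integrity

def can_read(subject_level, object_level):
    # Simple Integrity Property: no read down.
    return LEVELS[object_level] >= LEVELS[subject_level]

def can_write(subject_level, object_level):
    # * (star) Integrity Property: no write up.
    return LEVELS[object_level] <= LEVELS[subject_level]

def can_invoke(caller_level, callee_level):
    # Invocation Property: invoke only subjects at an equal or lower level.
    return LEVELS[callee_level] <= LEVELS[caller_level]

# The "monk" analogy above: a medium-integrity subject may read the high
# priest's book, but may only write material at or below its own level.
print(can_read("medium", "high"))    # True  (reading up is allowed)
print(can_write("medium", "high"))   # False (no write up)
print(can_write("medium", "low"))    # True  (writing down is allowed)
print(can_invoke("medium", "high"))  # False (cannot invoke upward)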
59955
https://en.wikipedia.org/wiki/Trojan%20Horse
Trojan Horse
The Trojan Horse refers to a wooden horse said to have been used by the Greeks, during the Trojan War, to enter the city of Troy and win the war. There is no Trojan Horse in Homer's Iliad, with the poem ending before the war is concluded. But in the Aeneid by Virgil, after a fruitless 10-year siege, the Greeks at the behest of Odysseus constructed a huge wooden horse and hid a select force of men inside, including Odysseus himself. The Greeks pretended to sail away, and the Trojans pulled the horse into their city as a victory trophy. That night the Greek force crept out of the horse and opened the gates for the rest of the Greek army, which had sailed back under cover of night. The Greeks entered and destroyed the city of Troy, ending the war. Metaphorically, a "Trojan horse" has come to mean any trick or stratagem that causes a target to invite a foe into a securely protected bastion or place. A malicious computer program that tricks users into willingly running it is also called a "Trojan horse" or simply a "Trojan". The main ancient source for the story still extant is the Aeneid of Virgil, a Latin epic poem from the time of Augustus. The story featured heavily in the Little Iliad and the Sack of Troy, both part of the Epic Cycle, but these have only survived in fragments and epitomes. As Odysseus was the chief architect of the Trojan Horse, it is also referred to in Homer's Odyssey. In the Greek tradition, the horse is called the "wooden horse" (δουράτεος ἵππος in Homeric/Ionic Greek (Odyssey 8.512); δούρειος ἵππος in Attic Greek). Men in the horse Thirty of the Achaeans' best warriors hid in the Trojan horse's womb and two spies in its mouth. Other sources give different numbers: The Bibliotheca 50; Tzetzes 23; and Quintus Smyrnaeus gives the names of 30, but says there were more. In late tradition the number was standardized at 40. Literary accounts According to Quintus Smyrnaeus, Odysseus thought of building a great wooden horse (the horse being the emblem of Troy), hiding an elite force inside, and fooling the Trojans into wheeling the horse into the city as a trophy. Under the leadership of Epeius, the Greeks built the wooden horse in three days. Odysseus's plan called for one man to remain outside the horse; he would act as though the Greeks had abandoned him, leaving the horse as a gift for the Trojans. An inscription was engraved on the horse reading: "For their return home, the Greeks dedicate this offering to Athena". Then they burned their tents and left for Tenedos by night. Greek soldier Sinon was "abandoned" and was to signal to the Greeks by lighting a beacon. In Virgil's poem, Sinon, the only volunteer for the role, successfully convinces the Trojans that he has been left behind and that the Greeks are gone. Sinon tells the Trojans that the Horse is an offering to the goddess Athena, meant to atone for the previous desecration of her temple at Troy by the Greeks and ensure a safe journey home for the Greek fleet. Sinon tells the Trojans that the Horse was built to be too large for them to take it into their city and gain the favor of Athena for themselves. While questioning Sinon, the Trojan priest Laocoön guesses the plot and warns the Trojans, in Virgil's famous line Timeo Danaos et dona ferentes ("I fear Greeks, even those bearing gifts"), Danai or Danaans (Homer's name for the Greeks) being the ones who had built the Trojan Horse. 
However, the god Poseidon sends two sea serpents to strangle him and his sons Antiphantes and Thymbraeus before any Trojan heeds his warning. According to Apollodorus the two serpents were sent by Apollo, whom Laocoön had insulted by sleeping with his wife in front of the "divine image". In the Odyssey, Homer says that Helen of Troy also guesses the plot and tries to trick and uncover the Greek soldiers inside the horse by imitating the voices of their wives, and Anticlus attempts to answer, but Odysseus shuts his mouth with his hand. King Priam's daughter Cassandra, the soothsayer of Troy, insists that the horse will be the downfall of the city and its royal family. She too is ignored, hence their doom and loss of the war. This incident is also mentioned in the Odyssey. The most detailed and most familiar version is in Virgil's Aeneid, Book II (trans. A. S. Kline). Book II includes Laocoön saying: "Equo ne credite, Teucri! Quidquid id est, timeo Danaos et dona ferentes." ("Do not trust the horse, Trojans! Whatever it is, I fear the Danaans [Greeks], even those bearing gifts.") Well before Virgil, the story is also alluded to in Greek classical literature. In Euripides' play Trojan Women, written in 415 BC, the god Poseidon proclaims: "For, from his home beneath Parnassus, Phocian Epeus, aided by the craft of Pallas, framed a horse to bear within its womb an armed host, and sent it within the battlements, fraught with death; whence in days to come men shall tell of 'the wooden horse,' with its hidden load of warriors." Factual explanations There has been speculation that the Trojan Horse may have been a battering ram or other sort of siege engine resembling, to some extent, a horse, and that the description of the use of this device was then transformed into a myth by later oral historians who were not present at the battle and were unaware of that meaning of the name. Assyrians at the time used siege machines with animal names that were often covered with dampened horse hides to protect against flaming arrows; it is possible that the Trojan Horse was such. Pausanias, who lived in the 2nd century AD, wrote in his book Description of Greece, "That the work of Epeius was a contrivance to make a breach in the Trojan wall is known to everybody who does not attribute utter silliness to the Phrygians"; by the Phrygians, he meant the Trojans. Some authors have suggested that the gift might also have been a ship, with warriors hidden inside. It has been noted that the terms used to put men in the horse are those used by ancient Greek authors to describe the embarkation of men on a ship and that there are analogies between the building of ships by Paris at the beginning of the Trojan saga and the building of the horse at the end; ships are called "sea-horses" once in the Odyssey. This view has recently gained support from naval archaeology: ancient texts and images show that a Phoenician merchant ship type decorated with a horse head, called hippos ('horse') by Greeks, became widespread in the Levant around the beginning of the 1st millennium BC and was used to trade precious metals and sometimes to pay tribute after the end of a war. This has led to the suggestion that the original story had the Greek soldiers hiding inside the hull of such a vessel, possibly disguised as a tribute, and that the term was later misunderstood in the oral transmission of the story, giving rise to the Trojan Horse myth. 
Ships with a horsehead decoration, perhaps cult ships, are also represented in artifacts of the Minoan/Mycenaean era; the image on a seal found in the palace of Knossos, dated around 1200 BC, which depicts a ship with oarsmen and a superimposed horse figure, originally interpreted as a representation of horse transport by sea, may in fact be related to this kind of vessel, and even be considered the first (pre-literary) representation of the Trojan Horse episode. A more speculative theory, originally proposed by Fritz Schachermeyr, states that the Trojan Horse is a metaphor for a destructive earthquake that damaged the walls of Troy and allowed the Greeks inside. In his theory, the horse represents Poseidon, who as well as being god of the sea was also god of horses and earthquakes. The theory is supported by the fact that archaeological digs have found that Troy VI was heavily damaged in an earthquake, but is hard to square with the mythological claim that Poseidon himself built the walls of Troy in the first place. Modern metaphorical use The term "Trojan horse" is used metaphorically to mean any trick or strategy that causes a target to invite a foe into a securely protected place; or to deceive by appearance, hiding malevolent intent in an outwardly benign exterior; to subvert from within using deceptive means. Artistic representations Pictorial representations of the Trojan Horse earlier than, or contemporary to, the first literary appearances of the episode can help clarify what the meaning of the story was as perceived by its contemporary audience. There are few ancient (before 480 BC) depictions of the Trojan Horse surviving. The earliest is on a Boeotian fibula dating from about 700 BC. Other early depictions are found on two relief pithoi from the Greek islands Mykonos and Tinos, both generally dated between 675 and 650 BC. The one from Mykonos (see figure) is known as the Mykonos vase. Historian Michael Wood dates the Mykonos vase to the eighth century BC, before the written accounts attributed by tradition to Homer, and posits this as evidence that the story of the Trojan Horse existed before those accounts were written. Other archaic representations of the Trojan horse are found on a Corinthian aryballos dating back to 560 BC (see figure), on a vase fragment to 540 BC (see figure), and on an Etruscan carnelian scarab. Citations External links "Earthquakes toppled ancient cities": "Don't blame the Trojan Horse: Earthquakes toppled ancient cities, Stanford geophysicist says" Aeneid Ancient Greek technology Horses in culture Individual wooden objects Military deception Mythological horses Sparta Trojan War
14058418
https://en.wikipedia.org/wiki/Bindura%20Nickel%20Corporation
Bindura Nickel Corporation
Bindura Nickel Corporation (BNC) is a mining company based in Zimbabwe's Mashonaland Central. Bindura operates mines and a smelter complex in the area of Bindura, Zimbabwe. BNC is operated and majority-owned by Mwana Africa plc, an African multinational mining company based in Johannesburg. BNC is listed on the Zimbabwe Stock Exchange. The chairman of the company is Kalaa Mpinga. History Creation of company The company's founding CEO was Kalaa Mpinga. In 2003, Anglo American sold its stake in the company, selling its 52.9% stake in Bindura Nickel Corporation to Mwana Africa Holdings for $8 million. In 2004, ASA Resource Group acquired the majority share of BNC from Anglo American Corporation, owning about 75%. BNC's debts subsequently went over $31 million. After the murder of BNC's chairman Leonard Chimimba in May 2006, there were media reports speculating foul play by the Botswana government. BNC hired an investigation firm in South Africa, and said there was "nothing unusual" about his death in a carjacking. After a plunge in nickel prices, in 2008 BNC began operating on "care and maintenance," i.e. without production. At the time of its shutdown, it was the only integrated nickel miner, smelter and refinery in Africa. The cost of resuming the Trojan mine was estimated at $50 million, and the Bindura complex at $150 million. Mwana owned 53% of the company. The shutdown of BNC in 2008 resulted in many job losses without compensation and with employees being told not to look for other jobs as they would be reinstated once the mine re-opened. Employees who were retrenched received a stipend of $50 a month. Given that many employees have families, many left the mines with no compensation and have since struggled to make a living with very little consequence to the company. Mwana was considering delisting BNC from the Zimbabwe Stock Exchange in 2009, as it sought to raise up to $150 million to restart operations in Bindura under CEO Kalaa Mpinga. In September 2011, the company received a $10 million loan to pay for operations while the company sought restart funding. In 2012, after it was closed, BNC stated that it would again begin underground operations at Trojan Nickel Mine. In 2012, China International Mining Group Corporation spent $21 million to recapitalise BNC. A $43.7 million impairment charge for Bindura Nickel Corporation's operations in Zimbabwe resulted in huge losses for its parent company, Mwana Africa, in 2013. The losses totaled $42.5 million, an increase of more than $35 million over the previous year. At the time, the CEO of BNC was Kalaa Mpinga. Prior to the impairment, the company had shown its first profit since 2007, with revenues in 2013 of $109 million. In 2013, Mpinga stated that a plan to develop Bindura's "refinery and smelter at Mwana's nickel operations" would be enacted by 2014. The mine and plant had been put on hold in 2008 due to weak nickel prices, but had been restarted in 2012. 2014-2015 BNC in 2014 was operating the Freda Rebecca gold mine in Zimbabwe. The mine was complying with a law newly enforced by Zimbabwe that required 51% of the company be held by black nationals of the country. The company was the best performer on the Zimbabwe Stock Exchange in 2014. After a dip in Bindura Nickel Corporation's share price in 2015, there were reports that the Zimbabwe Stock Exchange had struggled as a result, with the mining index dropping 50% that year. BNC that year was the second-worst performer on the exchange. 
In 2015, ASA gained control of the company and its assets, after acquiring BNC's holding company Mwana Africa and expelling founder Kalaa Mpinga from the company. The "coup" was orchestrated by China International Mining Group Corporation (CIMGC), BNC's largest shareholder, and CIMGC chairman Yat Hoi Ning. BNC was 75% owned by Mwana Africa in August 2015, with Kalaa Mpinga bidding to retain his position as BNC chairman despite being forced out of Mwana on June 10 after clashes with shareholders. In 2015, the company stopped its Trojan Mine restart program and re-deepening project. The company continued a restructuring in 2015, firing 350 workers, with the intent to reduce headcount by up to 1,150 people. Yat Hoi Ning was serving as executive chairman. In 2016, the company cut production costs, increasing gross production and increasing profit by 261%. The following year, its Trojan Mine smelter was stalled over a lack of funding. At the time, it was still Africa's only integrated nickel producer. Sale by ASA After 2017, the Chinese-controlled ASA Resources Group said it was selling its stake in Bindura Nickel Corporation. At the time, ASA operated Trojan Nickel Mine through BNC. In January 2017, ASA Resources owned 76% of BNC. By July 2018, the company was in the red, with assets of $28 million and liabilities totaling $32.7 million. At that point, the smelter restart project at Bindura had been suspended due to funding constraints, and was 83% complete. At that time, the company pointed to hobbling power costs in Zimbabwe, and was pressing the government for a "standstill on electricity costs in order to keep the industry viable." In 2019, BNC was lobbying the Reserve Bank of Zimbabwe to change forex rules relating to mining companies. In 2019, it said that its Trojan Mine smelter would be operational by October 2020. The company had recently been acquired by Sotic International. In 2019, it was working on a feasibility study of a $200 million Hunters Road mine, which was estimated to have 200,000 tons of nickel. In September 2019, BNC was evaluating takeover bids. In 2020, BNC was headed by CEO Batirai Manhando. Key people The chairman is Muchadeyi Masunda. Operations Mines Shangani Mine - Zimbabwe Trojan Nickel Mine - Bindura Products include nickel, copper and cobalt. Smelter The Bindura Smelter and Refinery Complex is located south of the town of Bindura. The smelter produces nickel cathodes, copper sulphide and cobalt hydroxide. See also Nickel mining and extraction Copper mining and extraction References External links Bindura Nickel Corporation @ Mwana Africa plc Copper mining companies of Zimbabwe Companies listed on the Zimbabwe Stock Exchange Mashonaland Central Province Bindura
2806697
https://en.wikipedia.org/wiki/QuteCom
QuteCom
QuteCom (previously called WengoPhone) was a free-software SIP-compliant VoIP client developed by the QuteCom (previously OpenWengo) community under the GPL-2.0-or-later license. It allows users to speak to other users of SIP-compliant VoIP software at no cost. It also allows users to call landlines and cell phones, send SMS and make video calls. None of these functions are tied to a particular provider, allowing users to choose any SIP provider. History Development on WengoPhone began in September 2004. The first published version was released as version 0.949. A next-generation version of WengoPhone (WengoPhone NG) began development in 2005. The last release of WengoPhone before the change to QuteCom was version 2.1.2. A WengoPhone Firefox plug-in has also been published, which was available on Mac OS X and Microsoft Windows, with a Linux version under development. On 28 January 2008, Wengo, the original sponsor of WengoPhone, transferred the sponsorship of the project to MBDSYS, and it became known as QuteCom. On 15 May 2008, osAlliance presented kvats as a branded 2.2 version of WengoPhone/QuteCom with the preferred SIP provider A1, a brand of mobilkom Austria. As of 29 July 2016, the domain qutecom.com is registered with a domain broker. Whois seems to indicate this happened on 30 March 2016. Calls PC-to-PC calls have Hi-Fi quality and use several codecs such as iLBC, G.711 (PCMA or PCMU), G.722, AMR (license needed), AMR-WB (license needed), G.729 (license needed). It is possible to start a conversation with other users of the same software or any other software that is SIP-compliant, such as Gizmo. QuteCom also allows users to make video calls using FFmpeg. The supported video codec is H.263. Since version 2.1, QuteCom allows IM chats with MSN, YIM, AIM, ICQ and XMPP users. This has been achieved by using the libpurple library. Concerning calls to landlines, the default server configuration is the one from Wengo, which was the primary sponsor of the OpenWengo project. After the release of version 2.1, QuteCom could be used with any SIP provider. This gives users an economic advantage, as they can choose the SIP provider according to how much the provider charges per minute and not according to the software they use. User interface The GUI is similar to those of other VoIP softphones such as Gizmo5 or Skype. From the main GUI, tabs allow access to the contact list, recent calls list, and user account information. Technically, it is written in the Python and Qt/C++ programming languages. Features The features of QuteCom are: SIP compliance Provider agnostic Allows users to send SMS to France NAT traversal Cross-platform Audio smileys Qt-based GUI Chatting with MSN, AIM, ICQ, Yahoo and XMPP users Encryption via SRTP, but key exchange uses Everbee key, which is not a standard Uses standard Session Initiation Protocol Limitations The main limitations of QuteCom are: Lack of true privacy features such as encryption. A beta AES-128 encryption using SRTP has been available since version 2.1 of QuteCom Does not support H.261 and sends malformed H.263 packets, preventing communication with other videophones Does not support audio conferences with more than three people Key Exchange for encryption is not a standard, so it works only between QuteCom clients See also Comparison of VoIP software Blink Ekiga Empathy (software) Jitsi Wengo SIP Service References 
Binary files for Windows and source code Instant messaging clients that use Qt Voice over IP clients that use Qt Videoconferencing software that uses Qt Free instant messaging clients Free VoIP software MacOS instant messaging clients Windows instant messaging clients Mozilla plug-ins (discontinued) Videotelephony
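Because the client is provider-agnostic, registering simply means sending standard SIP signalling to whichever registrar the user configures. The following Python sketch only formats the text of a bare-bones SIP REGISTER request in the RFC 3261 style; the user name, domain and addresses are invented placeholders, and this is not QuteCom code.

# Illustrative only: builds the plain text of a minimal SIP REGISTER request.
# The domain, user and addresses are placeholders, not real accounts.
def build_register(user, domain, local_ip, local_port=5060, expires=3600):
    lines = [
        f"REGISTER sip:{domain} SIP/2.0",
        f"Via: SIP/2.0/UDP {local_ip}:{local_port};branch=z9hG4bKexample",
        "Max-Forwards: 70",
        f"From: <sip:{user}@{domain}>;tag=12345",
        f"To: <sip:{user}@{domain}>",
        f"Call-ID: example-call-id@{local_ip}",
        "CSeq: 1 REGISTER",
        f"Contact: <sip:{user}@{local_ip}:{local_port}>",
        f"Expires: {expires}",
        "Content-Length: 0",
        "",
        "",
    ]
    return "\r\n".join(lines)

print(build_register("alice", "sip.example.net", "192.0.2.10"))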
30605506
https://en.wikipedia.org/wiki/1933%20USC%20Trojans%20football%20team
1933 USC Trojans football team
The 1933 USC Trojans football team represented the University of Southern California (USC) in the 1933 college football season. In their ninth year under head coach Howard Jones, the Trojans compiled a 10–1–1 record (4–1–1 against conference opponents), finished in third place in the Pacific Coast Conference, and outscored their opponents by a combined total of 257 to 30. Schedule Season summary Washington State Cotton Warburton 14 rushes, 221 yards References USC USC Trojans football seasons USC Trojans football
6557
https://en.wikipedia.org/wiki/Control%20unit
Control unit
The control unit (CU) is a component of a computer's central processing unit (CPU) that directs the operation of the processor. A CU typically uses a binary decoder to convert coded instructions into timing and control signals that direct the operation of the other units (memory, arithmetic logic unit and input and output devices, etc.). Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction. Multicycle control units The simplest computers use a multicycle microarchitecture. These were the earliest designs. They are still popular in the very smallest computers, such as the embedded systems that operate machinery. In a multicycle computer, the control unit often steps through the instruction cycle successively. This consists of fetching the instruction, fetching the operands, decoding the instruction, executing the instruction, and then writing the results back to memory. When the next instruction is placed in the control unit, it changes the behavior of the control unit to finish the instruction correctly. So, the bits of the instruction directly control the control unit, which in turn controls the computer. The control unit may include a binary counter to tell the control unit's logic what step it should do. Multicycle control units typically use both the rising and falling edges of their square-wave timing clock. They operate a step of their operation on each edge of the timing clock, so that a four-step operation completes in two clock cycles. This doubles the speed of the computer, given the same logic family. Many computers have two different types of unexpected events. An interrupt occurs because some type of input or output needs software attention in order to operate correctly. An exception is caused by the computer's operation. One crucial difference is that the timing of an interrupt cannot be predicted. Another is that some exceptions (e.g. a memory-not-available exception) can be caused by an instruction that needs to be restarted. Control units can be designed to handle interrupts in one of two typical ways. If a quick response is most important, a control unit is designed to abandon work to handle the interrupt. In this case, the work in process will be restarted after the last completed instruction. If the computer is to be very inexpensive, very simple, very reliable, or to get more work done, the control unit will finish the work in process before handling the interrupt. Finishing the work is inexpensive, because it needs no register to record the last finished instruction. It is simple and reliable because it has the fewest states. It also wastes the least amount of work. Exceptions can be made to operate like interrupts in very simple computers. If virtual memory is required, then a memory-not-available exception must retry the failing instruction. It is common for multicycle computers to use more cycles. Sometimes it takes longer to take a conditional jump, because the program counter has to be reloaded. Sometimes they do multiplication or division instructions by a process something like binary long multiplication and division. Very small computers might do arithmetic one or a few bits at a time. Some computers have very complex instructions that take many steps. 
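As a rough illustration of the stepping behaviour described above, the following Python sketch models a multicycle control unit as a loop that walks one instruction at a time through fetch, decode, execute and write-back; the three-instruction program, the opcodes and the register names are invented for the example.

# Toy multicycle control unit: a step counter walks each instruction through
# its phases in order. The instruction set and register file are invented.
memory = [("LOAD", "r0", 7), ("ADD", "r1", "r0"), ("HALT",)]
registers = {"r0": 0, "r1": 1}

pc = 0
running = True
while running:
    for step in ("FETCH", "DECODE", "EXECUTE", "WRITEBACK"):
        if step == "FETCH":
            instruction = memory[pc]
            pc += 1
        elif step == "DECODE":
            opcode, operands = instruction[0], instruction[1:]
        elif step == "EXECUTE":
            if opcode == "LOAD":
                result = operands[1]                            # immediate value
            elif opcode == "ADD":
                result = registers[operands[0]] + registers[operands[1]]
            else:                                               # HALT
                result = None
                running = False
        else:                                                   # WRITEBACK
            if opcode in ("LOAD", "ADD"):
                registers[operands[0]] = result
        print(step, instruction)

print(registers)   # {'r0': 7, 'r1': 8}

A real multicycle control unit drives hardware control signals on each clock edge rather than running Python branches, but the sequencing idea is the same.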
Pipelined control units Many medium-complexity computers pipeline instructions. This design is popular because of its economy and speed. In a pipelined computer, instructions flow through the computer. This design has several stages. For example, it might have one stage for each step of the Von Neumann cycle. A pipelined computer usually has "pipeline registers" after each stage. These store the bits calculated by a stage so that the logic gates of the next stage can use the bits to do the next step. It is common for even numbered stages to operate on one edge of the square-wave clock, while odd-numbered stages operate on the other edge. This speeds the computer by a factor of two compared to single-edge designs. In a pipelined computer, the control unit arranges for the flow to start, continue, and stop as a program commands. The instruction data is usually passed in pipeline registers from one stage to the next, with a somewhat separated piece of control logic for each stage. The control unit also assures that the instruction in each stage does not harm the operation of instructions in other stages. For example, if two stages must use the same piece of data, the control logic assures that the uses are done in the correct sequence. When operating efficiently, a pipelined computer will have an instruction in each stage. It is then working on all of those instructions at the same time. It can finish about one instruction for each cycle of its clock. When a program makes a decision, and switches to a different sequence of instructions, the pipeline sometimes must discard the data in process and restart. This is called a "stall." When two instructions could interfere, sometimes the control unit must stop processing a later instruction until an earlier instruction completes. This is called a "pipeline bubble" because a part of the pipeline is not processing instructions. Pipeline bubbles can occur when two instructions operate on the same register. Interrupts and unexpected exceptions also stall the pipeline. If a pipelined computer abandons work for an interrupt, more work is lost than in a multicycle computer. Predictable exceptions do not need to stall. For example, if an exception instruction is used to enter the operating system, it does not cause a stall. Speed? For the same speed of electronic logic, it can do more instructions per second than a multicycle computer. Also, even though the electronic logic has a fixed maximum speed, a pipelined computer can be made faster or slower by varying the number of stages in the pipeline. With more stages, each stage does less work, and so the stage has fewer delays from the logic gates. Economy? A pipelined model of a computer often has the least logic gates per instruction per second, less than either a multicycle or out-of-order computer. Why? The average stage is less complex than a multicycle computer. An out of order computer usually has large amounts of idle logic at any given instant. Similar calculations usually show that a pipelined computer uses less energy per instruction. However, a pipelined computer is usually more complex and more costly than a comparable multicycle computer. It typically has more logic gates, registers and a more complex control unit. In a like way, it might use more total energy, while using less energy per instruction. Out of order CPUs can usually do more instructions per second because they can do several instructions at once. 
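The following Python sketch, using an invented three-instruction program, illustrates the pipeline bubble just described: the control logic refuses to issue an instruction whose source register is still being produced by an instruction ahead of it, so an empty slot moves down the pipeline instead.

# Toy pipeline (issue -> EXECUTE -> WRITEBACK) showing a stall.
# Each instruction is (opcode, destination register, source registers).
program = [
    ("ADD", "r1", ("r2", "r3")),
    ("ADD", "r4", ("r1", "r1")),   # depends on r1, so it must wait one cycle
    ("ADD", "r5", ("r2", "r2")),
]

execute_stage = None
writeback_stage = None
pc = 0
cycle = 0
while pc < len(program) or execute_stage or writeback_stage:
    cycle += 1
    writeback_stage = execute_stage          # oldest instruction moves on
    issued = None
    if pc < len(program):
        _, _, sources = program[pc]
        # Hazard: a source register is written by the instruction now in
        # write-back, so its value is not available yet -> stall.
        if writeback_stage and writeback_stage[1] in sources:
            print(f"cycle {cycle}: stall, bubble inserted")
        else:
            issued = program[pc]
            pc += 1
    execute_stage = issued                   # None here is a pipeline bubble
    print(f"cycle {cycle}: EXECUTE={execute_stage} WRITEBACK={writeback_stage}")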
Preventing stalls Control units use many methods to keep a pipeline full and avoid stalls. For example, even simple control units can assume that a backwards branch, to a lower-numbered, earlier instruction, is a loop, and will be repeated. So, a control unit with this design will always fill the pipeline with the backwards branch path. If a compiler can detect the most frequently-taken direction of a branch, the compiler can just produce instructions so that the most frequently taken branch is the preferred direction of branch. In a like way, a control unit might get hints from the compiler: Some computers have instructions that can encode hints from the compiler about the direction of branch. Some control units do branch prediction: A control unit keeps an electronic list of the recent branches, encoded by the address of the branch instruction. This list has a few bits for each branch to remember the direction that was taken most recently. Some control units can do speculative execution, in which a computer might have two or more pipelines, calculate both directions of a branch, then discard the calculations of the unused direction. Results from memory can become available at unpredictable times because very fast computers cache memory. That is, they copy limited amounts of memory data into very fast memory. The CPU must be designed to process at the very fast speed of the cache memory. Therefore, the CPU might stall when it must access main memory directly. In modern PCs, main memory is as much as three hundred times slower than cache. To help this, out-of-order CPUs and control units were developed to process data as it becomes available. (See next section) But what if all the calculations are complete, but the CPU is still stalled, waiting for main memory? Then, a control unit can switch to an alternative thread of execution whose data has been fetched while the thread was idle. A thread has its own program counter, a stream of instructions and a separate set of registers. Designers vary the number of threads depending on current memory technologies and the type of computer. Typical computers such as PCs and smart phones usually have control units with a few threads, just enough to keep busy with affordable memory systems. Database computers often have about twice as many threads, to keep their much larger memories busy. Graphic processing units (GPUs) usually have hundreds or thousands of threads, because they have hundreds or thousands of execution units doing repetitive graphic calculations. When a control unit permits threads, the software also has to be designed to handle them. In general-purpose CPUs like PCs and smartphones, the threads are usually made to look very like normal time-sliced processes. At most, the operating system might need some awareness of them. In GPUs, the thread scheduling usually cannot be hidden from the application software, and is often controlled with a specialized subroutine library. Out of order control units A control unit can be designed to finish what it can. If several instructions can be completed at the same time, the control unit will arrange it. So, the fastest computers can process instructions in a sequence that can vary somewhat, depending on when the operands or instruction destinations become available. Most supercomputers and many PC CPUs use this method. The exact organization of this type of control unit depends on the slowest part of the computer. 
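Before moving further into out-of-order designs, the branch-history table mentioned in the stall-prevention discussion above can be sketched in a few lines of Python: each branch address gets a two-bit saturating counter, and the prediction is simply whether the counter is in its upper half. The branch address and the outcome stream below are invented.

# Sketch of a branch-history table with a two-bit saturating counter per
# branch address (0-1 predict "not taken", 2-3 predict "taken").
from collections import defaultdict

table = defaultdict(lambda: 1)          # start weakly "not taken"

def predict(branch_address):
    return table[branch_address] >= 2   # True means "predict taken"

def update(branch_address, taken):
    counter = table[branch_address]
    counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    table[branch_address] = counter

# Invented outcome stream for the branch at (made-up) address 0x40:
outcomes = [True, True, False, True, True, True]
correct = 0
for taken in outcomes:
    correct += predict(0x40) == taken
    update(0x40, taken)
print(f"{correct} of {len(outcomes)} predictions correct")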
When the execution of calculations is the slowest, instructions flow from memory into pieces of electronics called "issue units." An issue unit holds an instruction until both its operands and an execution unit are available. Then, the instruction and its operands are "issued" to an execution unit. The execution unit does the instruction. Then the resulting data is moved into a queue of data to be written back to memory or registers. If the computer has multiple execution units, it can usually do several instructions per clock cycle. It is common to have specialized execution units. For example, a modestly priced computer might have only one floating-point execution unit, because floating point units are expensive. The same computer might have several integer units, because these are relatively inexpensive, and can do the bulk of instructions. One kind of control unit for issuing uses an array of electronic logic, a "scoreboard" that detects when an instruction can be issued. The "height" of the array is the number of execution units, and the "length" and "width" are each the number of sources of operands. When all the items come together, the signals from the operands and execution unit will cross. The logic at this intersection detects that the instruction can work, so the instruction is "issued" to the free execution unit. An alternative style of issuing control unit implements the Tomasulo algorithm, which reorders a hardware queue of instructions. In some sense, both styles utilize a queue. The scoreboard is an alternative way to encode and reorder a queue of instructions, and some designers call it a queue table. With some additional logic, a scoreboard can compactly combine execution reordering, register renaming and precise exceptions and interrupts. Further, it can do this without the power-hungry, complex content-addressable memory used by the Tomasulo algorithm. If the execution is slower than writing the results, the memory write-back queue always has free entries. But what if the memory writes slowly? Or what if the destination register will be used by an "earlier" instruction that has not yet issued? Then the write-back step of the instruction might need to be scheduled. This is sometimes called "retiring" an instruction. In this case, there must be scheduling logic on the back end of execution units. It schedules access to the registers or memory that will get the results. Retiring logic can also be designed into an issuing scoreboard or a Tomasulo queue, by including memory or register access in the issuing logic. Out of order controllers require special design features to handle interrupts. When there are several instructions in progress, it is not clear where in the instruction stream an interrupt occurs. For input and output interrupts, almost any solution works. However, when a computer has virtual memory, an interrupt occurs to indicate that a memory access failed. This memory access must be associated with an exact instruction and an exact processor state, so that the processor's state can be saved and restored by the interrupt. A usual solution preserves copies of registers until a memory access completes. Also, out of order CPUs have even more problems with stalls from branching, because they can complete several instructions per clock cycle, and usually have many instructions in various stages of progress. So, these control units might use all of the solutions used by pipelined processors. 
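A heavily simplified Python sketch of the scoreboard-style issue check described above: an instruction is issued only when none of its source registers is still waiting on an earlier result and an execution unit of the right kind is free. The unit counts and the instruction mix are invented, and a real scoreboard would also clear a destination register once its instruction retires.

# Simplified scoreboard-style issue check. Unit names and the instruction
# sequence are invented; results are never retired in this toy example.
units = {"integer": 2, "float": 1}       # free execution units by kind
pending = set()                          # registers whose results are in flight

def try_issue(kind, destination, sources):
    if pending & set(sources):
        return False                     # an operand is not ready yet
    if units[kind] == 0:
        return False                     # no free execution unit of this kind
    units[kind] -= 1
    pending.add(destination)
    return True

print(try_issue("integer", "r1", ["r2", "r3"]))   # True: issued
print(try_issue("float",   "r4", ["r1", "r5"]))   # False: r1 still in flight
print(try_issue("integer", "r6", ["r2", "r2"]))   # True: second integer unit
print(try_issue("integer", "r7", ["r3", "r5"]))   # False: no integer unit free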
Translating control units Some computers translate each single instruction into a sequence of simpler instructions. The advantage is that an out of order computer can be simpler in the bulk of its logic, while handling complex multi-step instructions. x86 Intel CPUs since the Pentium Pro translate complex CISC x86 instructions to more RISC-like internal micro-operations. In these, the "front" of the control unit manages the translation of instructions. Operands are not translated. The "back" of the CU is an out-of-order CPU that issues the micro-operations and operands to the execution units and data paths. Control units for low-powered computers Many modern computers have controls that minimize power usage. In battery-powered computers, such as those in cell-phones, the advantage is longer battery life. In computers with utility power, the justification is to reduce the cost of power, cooling or noise. Most modern computers use CMOS logic. CMOS wastes power in two common ways: By changing state, i.e. "active power," and by unintended leakage. The active power of a computer can be reduced by turning off control signals. Leakage current can be reduced by reducing the electrical pressure, the voltage, making the transistors with larger depletion regions or turning off the logic completely. Active power is easier to reduce because data stored in the logic is not affected. The usual method reduces the CPU's clock rate. Most computer systems use this method. It is common for a CPU to idle during the transition to avoid side-effects from the changing clock. Most computers also have a "halt" instruction. This was invented to stop non-interrupt code so that interrupt code has reliable timing. However, designers soon noticed that a halt instruction was also a good time to turn off a CPU's clock completely, reducing the CPU's active power to zero. The interrupt controller might continue to need a clock, but that usually uses much less power than the CPU. These methods are relatively easy to design, and became so common that others were invented for commercial advantage. Many modern low-power CMOS CPUs stop and start specialized execution units and bus interfaces depending on the needed instruction. Some computers even arrange the CPU's microarchitecture to use transfer-triggered multiplexers so that each instruction only utilises the exact pieces of logic needed. One common method is to spread the load to many CPUs, and turn off unused CPUs as the load reduces. The operating system's task switching logic saves the CPUs' data to memory. In some cases, one of the CPUs can be simpler and smaller, literally with fewer logic gates. So, it has low leakage, and it is the last to be turned off, and the first to be turned on. Also it then is the only CPU that requires special low-power features. A similar method is used in most PCs, which usually have an auxiliary embedded CPU that manages the power system. However, in PCs, the software is usually in the BIOS, not the operating system. Theoretically, computers at lower clock speeds could also reduce leakage by reducing the voltage of the power supply. This affects the reliability of the computer in many ways, so the engineering is expensive, and it is uncommon except in relatively expensive computers such as PCs or cellphones. Some designs can use very low leakage transistors, but these usually add cost. 
The depletion barriers of the transistors can be made larger to have less leakage, but this makes the transistor larger and thus both slower and more expensive. Some vendors use this technique in selected portions of an IC by constructing low leakage logic from large transistors that some processes provide for analog circuits. Some processes place the transistors above the surface of the silicon, in "fin FETs", but these processes have more steps, so are more expensive. Special transistor doping materials (e.g. hafnium) can also reduce leakage, but this adds steps to the processing, making it more expensive. Some semiconductors have a larger band-gap than silicon. However, these materials and processes are currently (2020) more expensive than silicon. Managing leakage is more difficult, because before the logic can be turned off, the data in it must be moved to some type of low-leakage storage. Some CPUs make use of a special type of flip-flop (to store a bit) that couples a fast, high-leakage storage cell to a slow, large (expensive) low-leakage cell. These two cells have separated power supplies. When the CPU enters a power saving mode (e.g. because of a halt that waits for an interrupt), data is transferred to the low-leakage cells, and the others are turned off. When the CPU leaves a low-leakage mode (e.g. because of an interrupt), the process is reversed. Older designs would copy the CPU state to memory, or even disk, sometimes with specialized software. Very simple embedded systems sometimes just restart. Integrating with the Computer All modern CPUs have control logic to attach the CPU to the rest of the computer. In modern computers, this is usually a bus controller. When an instruction reads or writes memory, the control unit either controls the bus directly, or controls a bus controller. Many modern computers use the same bus interface for memory, input and output. This is called "memory-mapped I/O". To a programmer, the registers of the I/O devices appear as numbers at specific memory addresses. x86 PCs use an older method, a separate I/O bus accessed by I/O instructions. A modern CPU also tends to include an interrupt controller. It handles interrupt signals from the system bus. The control unit is the part of the computer that responds to the interrupts. There is often a cache controller to cache memory. The cache controller and the associated cache memory are often the largest physical part of a modern, higher-performance CPU. When the memory, bus or cache is shared with other CPUs, the control logic must communicate with them to assure that no computer ever gets out-of-date data. Many historic computers built some type of input and output directly into the control unit. For example, many historic computers had a front panel with switches and lights directly controlled by the control unit. These let a programmer directly enter a program and debug it. In later production computers, the most common use of a front panel was to enter a small bootstrap program to read the operating system from disk. This was annoying. So, front panels were replaced by bootstrap programs in read-only memory. Most PDP-8 models had a data bus designed to let I/O devices borrow the control unit's memory read and write logic. This reduced the complexity and expense of high speed I/O controllers, e.g. for disk. The Xerox Alto had a multitasking microprogrammable control unit that performed almost all I/O. 
This design provided most of the features of a modern PC with only a tiny fraction of the electronic logic. The dual-thread computer was run by the two lowest-priority microthreads. These performed calculations whenever I/O was not required. High-priority microthreads provided (in decreasing priority) video, network, disk, a periodic timer, mouse, and keyboard. The microprogram did the complex logic of the I/O device, as well as the logic to integrate the device with the computer. For the actual hardware I/O, the microprogram read and wrote shift registers for most I/O, sometimes with resistor networks and transistors to shift output voltage levels (e.g. for video). To handle outside events, the microcontroller had microinterrupts to switch threads at the end of a thread's cycle, e.g. at the end of an instruction, or after a shift-register was accessed. The microprogram could be rewritten and reinstalled, which was very useful for a research computer. Functions of the control unit Thus a program of instructions in memory will cause the CU to configure a CPU's data flows to manipulate the data correctly between instructions. This results in a computer that could run a complete program and require no human intervention to make hardware changes between instructions (as had to be done when using only punch cards for computations before stored-program computers with CUs were invented). Hardwired control unit Hardwired control units are implemented through the use of combinational logic units, featuring a finite number of gates that can generate specific results based on the instructions that were used to invoke those responses. Hardwired control units are generally faster than the microprogrammed designs. This design uses a fixed architecture—it requires changes in the wiring if the instruction set is modified or changed. It can be convenient for simple, fast computers. A controller that uses this approach can operate at high speed; however, it has little flexibility. A complex instruction set can overwhelm a designer who uses ad hoc logic design. The hardwired approach has become less popular as computers have evolved. Previously, control units for CPUs used ad hoc logic, and they were difficult to design. Microprogram control unit The idea of microprogramming was introduced by Maurice Wilkes in 1951 as an intermediate level to execute computer program instructions. Microprograms were organized as a sequence of microinstructions and stored in special control memory. The algorithm for the microprogram control unit, unlike the hardwired control unit, is usually specified by flowchart description. The main advantage of a microprogrammed control unit is the simplicity of its structure. Outputs from the controller are specified by microinstructions. The microprogram can be debugged and replaced much like software. Combination methods of design A popular variation on microcode is to debug the microcode using a software simulator. Then, the microcode is a table of bits. This is a logical truth table that translates a microcode address into the control unit outputs. This truth table can be fed to a computer program that produces optimized electronic logic. The resulting control unit is almost as easy to design as microprogramming, but it has the fast speed and low number of logic elements of a hardwired control unit. The practical result resembles a Mealy machine or Richards controller. 
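To illustrate the "table of bits" just described, here is a minimal Python sketch of a control store that maps a microinstruction address to a word of control-signal bits; the signal names and the tiny four-entry microprogram are invented, not taken from any real machine.

# Toy microprogrammed control store: each address yields a word of
# control-signal bits. Signal names and the microprogram are invented.
SIGNAL_BITS = {"mem_read": 0, "ir_load": 1, "alu_add": 2, "reg_write": 3,
               "pc_increment": 4}

def microword(*asserted):
    word = 0
    for name in asserted:
        word |= 1 << SIGNAL_BITS[name]
    return word

control_store = [
    microword("mem_read", "ir_load"),        # 0: fetch the instruction
    microword("pc_increment"),               # 1: advance the program counter
    microword("alu_add"),                    # 2: perform the operation
    microword("reg_write"),                  # 3: write the result back
]

def asserted_signals(address):
    word = control_store[address]
    return [name for name, bit in SIGNAL_BITS.items() if word & (1 << bit)]

for address in range(len(control_store)):
    print(address, asserted_signals(address))

Fed through logic optimization, such a table becomes the fast hardwired-style controller the paragraph above describes.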
See also CPU design Computer architecture Richards controller Controller (computing) References Central processing unit Digital electronics
30958
https://en.wikipedia.org/wiki/Time-sharing
Time-sharing
In computing, time-sharing is the sharing of a computing resource among many users at the same time by means of multiprogramming and multi-tasking. Its emergence as the prominent model of computing in the 1970s represented a major technological shift in the history of computing. By allowing many users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one, and promoted the interactive use of computers and the development of new interactive applications. History Batch processing The earliest computers were extremely expensive devices, and very slow in comparison to later models. Machines were typically dedicated to a particular set of tasks and operated by control panels, the operator manually entering small programs via switches in order to load and run a series of programs. These programs might take hours to run. As computers grew in speed, run times dropped, and soon the time taken to start up the next program became a concern. Newer batch processing software and methodologies were developed that decreased these "dead periods" by queuing up programs; these included operating systems such as IBSYS (1960). Comparatively inexpensive card punch or paper tape writers were used by programmers to write their programs "offline". Programs were submitted to the operations team, which scheduled them to be run. Output (generally printed) was returned to the programmer. The complete process might take days, during which time the programmer might never see the computer. Stanford students made a short film humorously critiquing this situation. The alternative of allowing the user to operate the computer directly was generally far too expensive to consider. This was because users might have long periods of entering code while the computer remained idle. This situation limited interactive development to those organizations that could afford to waste computing cycles: large universities for the most part. Time-sharing Time-sharing was developed out of the realization that while any single user would make inefficient use of a computer, a large group of users together would not. This was due to the pattern of interaction: Typically an individual user entered bursts of information followed by long pauses, but a group of users working at the same time would mean that the pauses of one user would be filled by the activity of the others. Given an optimal group size, the overall process could be very efficient. Similarly, small slices of time spent waiting for disk, tape, or network input could be granted to other users. The concept is claimed to have been first described by John Backus in the 1954 summer session at MIT, and later by Bob Bemer in his 1957 article "How to consider a computer" in Automatic Control Magazine. In a paper published in December 1958, W. F. Bauer wrote that "The computers would handle a number of problems concurrently. Organizations would have input-output equipment installed on their own premises and would buy time on the computer much the same way that the average household buys power and water from utility companies." Christopher Strachey, who became Oxford University's first professor of computation, filed a patent application for "time-sharing" in February 1959. He gave a paper "Time Sharing in Large Fast Computers" at the first UNESCO Information Processing Conference in Paris in June that year, where he passed the concept on to J. C. R. 
Licklider. This paper was credited by the MIT Computation Center in 1963 as "the first paper on time-shared computers". Implementing a system able to take advantage of this was initially difficult. Batch processing was effectively a methodological development on top of the earliest systems. Since computers still ran single programs for single users at any time, the primary change with batch processing was the time delay between one program and the next. Developing a system that supported multiple users at the same time was a completely different concept. The "state" of each user and their programs would have to be kept in the machine, and then switched between quickly. This would take up computer cycles, and on the slow machines of the era this was a concern. However, as computers rapidly improved in speed, and especially in size of core memory in which users' states were retained, the overhead of time-sharing continually decreased, relatively speaking. The first project to implement time-sharing of user programs was initiated by John McCarthy at MIT in 1959, initially planned on a modified IBM 704, and later on an additionally modified IBM 709 (one of the first computers powerful enough for time-sharing). One of the deliverables of the project, known as the Compatible Time-Sharing System or CTSS, was demonstrated in November 1961. CTSS has a good claim to be the first time-sharing system and remained in use until 1973. Another contender for the first demonstrated time-sharing system was PLATO II, created by Donald Bitzer at a public demonstration at Robert Allerton Park near the University of Illinois in early 1961. But this was a special-purpose system. Bitzer has long said that the PLATO project would have gotten the patent on time-sharing if only the University of Illinois had not lost the patent for two years. JOSS began time-sharing service in January 1964. The first commercially successful time-sharing system was the Dartmouth Time Sharing System. Development Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers (centralized computing systems), which in many implementations sequentially polled the terminals to see whether any additional data was available or action was requested by the computer user. Later interconnection technology was interrupt-driven, and some of these used parallel data transfer technologies such as the IEEE 488 standard. Generally, computer terminals were utilized on college properties in much the same places as desktop computers or personal computers are found today. In the earliest days of personal computers, many were in fact used as particularly smart terminals for time-sharing systems. The Dartmouth Time Sharing System's creators wrote in 1968 that "any response time which averages more than 10 seconds destroys the illusion of having one's own computer". Conversely, timesharing users thought that their terminal was the computer. With the rise of microcomputing in the early 1980s, time-sharing became less significant, because individual microprocessors were sufficiently inexpensive that a single person could have all the CPU time dedicated solely to their needs, even when idle. However, the Internet brought the general concept of time-sharing back into popularity. Expensive corporate server farms costing millions can host thousands of customers all sharing the same common resources. 
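As a toy illustration of the idea described above, in which each user's state is kept in the machine and the processor is switched between users so that one user's pauses are filled with another's work, here is a minimal round-robin time-slicing sketch in Python; the users, their workloads and the time quantum are invented.

# Toy round-robin time-sharing: each user's "state" is just a count of
# remaining work units; the scheduler gives each ready user a small time
# slice in turn. Users and workloads are invented for illustration.
from collections import deque

QUANTUM = 2
ready = deque([("alice", 3), ("bob", 5), ("carol", 1)])

tick = 0
while ready:
    user, remaining = ready.popleft()
    used = min(QUANTUM, remaining)
    tick += used
    remaining -= used
    print(f"t={tick:2d}  {user} ran {used} unit(s), {remaining} left")
    if remaining:
        ready.append((user, remaining))    # save the state, go to back of queue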
As with the early serial terminals, web sites operate primarily in bursts of activity followed by periods of idle time. This bursting nature permits the service to be used by many customers at once, usually with no perceptible communication delays, unless the servers start to get very busy. Time-sharing business Genesis In the 1960s, several companies started providing time-sharing services as service bureaus. Early systems used Teletype Model 33 KSR or ASR or Teletype Model 35 KSR or ASR machines in ASCII environments, and IBM Selectric typewriter-based terminals (especially the IBM 2741) with two different seven-bit codes. They would connect to the central computer by dial-up Bell 103A modem or acoustically coupled modems operating at 10–15 characters per second. Later terminals and modems supported 30–120 characters per second. The time-sharing system would provide a complete operating environment, including a variety of programming language processors, various software packages, file storage, bulk printing, and off-line storage. Users were charged rent for the terminal, a charge for hours of connect time, a charge for seconds of CPU time, and a charge for kilobyte-months of disk storage. Common systems used for time-sharing included the SDS 940, the PDP-10, the IBM 360, and the GE-600 series. Companies providing this service included GE's GEISCO, the IBM subsidiary The Service Bureau Corporation, Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), Bolt, Beranek, and Newman (BBN) and Time Sharing Ltd. in the UK. By 1968, there were 32 such service bureaus serving the US National Institutes of Health (NIH) alone. The Auerbach Guide to Timesharing (1973) lists 125 different timesharing services using equipment from Burroughs, CDC, DEC, HP, Honeywell, IBM, RCA, Univac, and XDS. Rise and fall In 1975, it was said about one of the major super-mini computer manufacturers that "The biggest end-user market currently is time-sharing." For DEC, for a while the second largest computer company (after IBM), this was also true: Their PDP-10 and IBM's 360/67 were widely used by commercial timesharing services such as CompuServe, On-Line Systems (OLS), Rapidata and Time Sharing Ltd. The advent of the personal computer marked the beginning of the decline of time-sharing. The economics were such that computer time went from being an expensive resource that had to be shared to being so cheap that computers could be left to sit idle for long periods in order to be available as needed. Rapidata as an example Although many time-sharing services simply closed, Rapidata held on, and became part of National Data Corporation. It was still of sufficient interest in 1982 to be the focus of "A User's Guide to Statistics Programs: The Rapidata Timesharing System". Even as revenue fell by 66% and National Data subsequently developed its own problems, attempts were made to keep this timesharing business going. UK Time Sharing Limited (TSL, 1969-1974) - launched using DEC systems. PERT was one of its popular offerings. TSL was acquired by ADP in 1974. OLS Computer Services (UK) Limited (1975-1980) - using HP & DEC systems. The computer utility Beginning in 1964, the Multics operating system was designed as a computing utility, modeled on the electrical or telephone utilities. In the 1970s, Ted Nelson's original "Xanadu" hypertext repository was envisioned as such a service. 
It seemed, as the computer industry grew, that no such consolidation of computing resources into utility-like timesharing services would occur. In the 1990s the concept was, however, revived in somewhat modified form under the banner of cloud computing. Security Time-sharing was the first time that multiple processes, owned by different users, were running on a single machine, and these processes could interfere with one another. For example, one process might alter shared resources which another process relied on, such as a variable stored in memory. When only one user was using the system, this would result in possibly wrong output; with multiple users, it might also mean that other users got to see information they were not meant to see. To prevent this from happening, an operating system needed to enforce a set of policies that determined which privileges each process had. For example, the operating system might deny access to a certain variable by a certain process. The first international conference on computer security in London in 1971 was primarily driven by the time-sharing industry and its customers. Notable time-sharing systems Significant early timesharing systems: Allen-Babcock RUSH (Remote Users of Shared Hardware) Time-sharing System on IBM S/360 hardware (1966) → Tymshare AT&T Bell Labs Unix (1971) → UC Berkeley BSD Unix (1977) BBN PDP-1 Time-sharing System → Massachusetts General Hospital PDP-1D → MUMPS BBN TENEX → DEC TOPS-20, Foonly FOONEX, MAXC OS at PARC, Stanford Low Overhead TimeSharing (LOTS), which ran TOPS-20 Berkeley Timesharing System at UC Berkeley Project Genie → Scientific Data Systems SDS 940 (Tymshare, BBN, SRI, Community Memory) → BCC 500 → MAXC at PARC Burroughs Time-sharing MCP → HP 3000 MPE Cambridge Multiple Access System was developed for the Titan, the prototype Atlas 2 computer built by Ferranti for the University of Cambridge. This was the first time-sharing system developed outside the United States, and it influenced the later development of UNIX. Compower Ltd., a wholly owned subsidiary of the National Coal Board (later British Coal Corporation) in the UK. Originally National Coal Board (NCB) Computer Services, it became Compower in 1973 providing computing and time-share services to internal NCB users and as a commercial service to external users. Sold to Philips C&P (Communications and Processing) in August 1994. CompuServe, also branded as Compu-Serv, CIS. Compu-Time, Inc., on Honeywell 400/4000, started in 1968 in Ft Lauderdale, Florida, moved to Daytona Beach in 1970. CDC MACE, APEX → Kronos → NOS → NOS/VE Dartmouth Time Sharing System (DTSS) → GE Time-sharing → GEnie DEC PDP-6 Time-sharing Monitor → TOPS-10 → BBN TENEX → DEC TOPS-20 DEC TSS/8 → RSTS-11, RSX-11 → OpenVMS English Electric KDF9 COTAN (Culham Online Task Activation Network) HP 2000 Time-Shared BASIC HP 3000 series IBM CALL/360, CALL/OS - using IBM System/360 Model 50 IBM CP-40 → CP-67 → CP-370 → CP/CMS → VM/CMS IBM TSO for OS/MVT → for OS/VS2 → for MVS → for z/OS IBM TSS/360 → TSS/370 ICT 1900 series GEORGE 3 MOP (Multiple Online Programming) International Timesharing Corporation on dual CDC 3300 systems. MIT CTSS → MULTICS (MIT / GE / Bell Labs) → Unix MIT Time-sharing System for the DEC PDP-1 → ITS McGill University MUSIC → IBM MUSIC/SP Michigan Terminal System, on the IBM S/360-67, S/370, and successors. Michigan State University CDC SCOPE/HUSTLER System National CSS VP/CSS, on IBM 360 series; originally based on IBM's CP/CMS. Oregon State University OS-3, on CDC 3000 series. 
Prime Computer PRIMOS RAND JOSS → JOSS-2 → JOSS-3 RCA TSOS → Univac / Unisys VMOS → VS/9 Service in Informatics and Analysis (SIA), on CDC 6600 Kronos. System Development Corporation Time-sharing System, on the AN/FSQ-32. Stanford ORVYL and WYLBUR, on IBM S/360-67. Stanford PDP-1 Time-sharing System → SAIL → WAITS Time Sharing Ltd. (TSL) on DEC PDP-10 systems → Automatic Data Processing (ADP), first commercial time-sharing system in Europe and first dual (fault tolerant) time-sharing system. Tone (TSO-like, for VS1), a non-IBM Time-sharing product, marketed by Tone Software Co; TSO required VS2. Tymshare SDS-940 → Tymcom X → Tymcom XX Unisys/UNIVAC 1108 EXEC 8 → OS 1100 → OS 2200 UC Berkeley CAL-TSS, on CDC 6400. XDS UTS → CP-V → Honeywell CP-6 See also Cloud computing The Heralds of Resource Sharing, a 1972 film. History of CP/CMS, IBM's virtual machine operating system (CP) that supported time-sharing (CMS). IBM M44/44X, an experimental computer system based on an IBM 7044 used to simulate multiple virtual machines. IBM System/360 Model 67, the only IBM S/360 series mainframe to support virtual memory. Multiseat configuration, multiple users on a single personal computer. Project MAC, a DARPA funded project at MIT famous for groundbreaking research in operating systems, artificial intelligence, and the theory of computation. TELCOMP, an interactive, conversational programming language based on JOSS, developed by BBN in 1964. Timeline of operating systems VAX (Virtual Address eXtension), a computer architecture and family of computers developed by DEC. Utility computing Virtual memory Time-sharing system evolution References Further reading Nelson, Theodor (1974). Computer Lib: You Can and Must Understand Computers Now; Dream Machines: "New Freedoms Through Computer Screens— A Minority Report". Self-published. . pp. 56–57. : "The author relates a short history of time-sharing, the initial time-sharing experiments, the modifications of existing computers and those designed specifically for time-sharing, project MAC, significant features of the system, services, languages, programs, scope displays and light pens, and intercommunication. External links "Time Sharing Supervisor Programs", notes comparing the supervisor programs of CP-67, TSS/360, the Michigan Terminal System (MTS), and Multics by Michael T. Alexander, Advanced Topics in Systems Programming (1970, revised 1971), University of Michigan Engineering Summer Conference. "The Computer Utility As A Marketplace For Computer Services", Robert Frankston's MIT Master's Thesis, 1973. Reminiscences on the Theory of Time-Sharing by John McCarthy, 1983. Origins of timesharing by Bob Bemer. "40 years of Multics, 1969-2009", an interview with Professor Fernando J. Corbató on the history of Multics and origins of time-sharing, 2009. "Mainframe Computers: The Virtues of Sharing", Revolution: The First 2000 Years of Computing, Computer History Museum Exhibition, January 2011. "Mainframe Computers: Timesharing as a Business", Revolution: The First 2000 Years of Computing, Computer History Museum Exhibition, January 2011. Operating system technology Computer systems
54921718
https://en.wikipedia.org/wiki/Mapzen
Mapzen
Mapzen, founded in 2013 and headquartered in New York City, was an open source mapping platform company focused on the core components of geo platforms, including search (geocoding), rendering (vector tiles), navigation/routing, and data. Mapzen's components are used by OpenStreetMap, CartoDB, and Remix, amongst others. The components, hosted on GitHub, are written in JavaScript, Ruby, Java, and Python. Mapzen's CEO, Randy Meech, was previously SVP of engineering for MapQuest. Mapzen was supported by Samsung Research America and was known to have hired mapping specialists from Apple. Mapzen shut down operations in late January 2018. On 28 January 2019, the Linux Foundation announced that Mapzen would become a Linux Foundation Project. Projects Mapzen's hosted products were powered by open-source components, including: Pelias - a geocoder/search engine Tangram - a set of cross-platform 3D map rendering libraries Tilezen - vector map tiles based on OpenStreetMap data Valhalla - a multi-modal routing engine Transitland - an open transit data platform that aggregates GTFS feeds Who's on First - a gazetteer References External links Mapzen · an open, sustainable, and accessible mapping platform Documentation · Mapzen Web mapping Companies based in New York City Linux Foundation projects
32062896
https://en.wikipedia.org/wiki/List%20of%20Korg%20products
List of Korg products
This is a list of products manufactured by Korg Incorporated, a Japanese company that produces electronic musical instruments, audio processors and guitar pedals, recording equipment, and electronic tuners. 1960s 1963 Donca-Matic DA-20: Rhythm machine, first product. 1966 Donca-Matic DE-20: Fully electronic rhythm machine. 1967 Korg Mini Pops: Fully electronic rhythm machine. 1960s gallery 1970s 1970 Korg Prototype No.1: Synthesizer organ prototype, developed by Fumio Mieda 1972 Korg KORGUE: Synthesizer organ product 1973 Korg miniKORG 700: First Korg synthesizer 1974 Korg miniKORG 700S: 2VCO version of miniKORG 700 Korg MAXI KORG 800DV: Dual voice synthesizer 1975 Korg 900PS: Preset synthesizer Korg SB-100: Bass keyboard synthesizer Korg WT-10: World's first hand-held electronic tuner 1976 Korg PE-2000/PE-1000: Full polyphonic preset synthesizers 1977 Korg 770: Successor of Korg 700S (2VCO + Ring modulator). Korg M-500 Micro Preset: Preset synthesizer Korg PS-3100/PS-3200/PS-3300: World's first full polyphonic patchable synthesizers 1978 MS-10/MS-20/MS-50/SQ-10: MS series semi-modular synthesizer system Korg VC-10 Vocoder 1979-80 Korg Σ (Sigma), Λ (Lambda), Δ (Delta) 1970s gallery 1980s 1980 Korg CX-3: One of the first Hammond B-3 clonewheel organs. It earned especially high marks for its authentic simulation of the B-3's Leslie rotating speaker, a nearly inseparable part of the original instrument's sound. An updated model called the New CX-3 was released in 2000, and uses sample-based technology, as opposed to the original's analog emulation. Both incarnations of the instrument feature a double-manual version called the BX-3. The first-generation models also included an output for the instrument to hook up to a real Leslie speaker. Korg Trident: At the time of its release, the Trident was the flagship of Korg's lineup. It was divided into three distinct sections – polysynth, brass and strings – and featured an on-board flanger, a rarity for any synth at the time. The Trident was capable of eight notes of polyphony, and featured a 16-program memory. An upgraded version became available in 1982. 1981 The Korg Polysix is a 61-key, six-voice programmable synthesizer. It was released to compete with Roland's Juno-6 synth, and both keyboards shared similar features, such as a built-in chorus unit and an arpeggiator. However, the Polysix offered memory for patch storage, and its chorus unit was a fully-fledged analog delay unit capable of phaser and "ensemble" effects. The instrument was recreated in a virtual version, the PolysixEX for Korg's Legacy Collection, and is also available as an add-on for the OASYS synth. It is also one of the included synth engines with the Kronos line of synths. Korg Mono/Poly 1982 Korg KPR-77: Analog drum machine. Korg Poly-61: The successor of the Polysix with digitally controlled analog oscillators; Korg's first "knobless" synthesizer. Shortly before it was discontinued, a MIDI version known as the Poly-61M was released. 1983 Korg Poly-800: The first fully programmable synthesizer that sold for less than $1000, notable for using digitally controlled analog oscillators and sharing a single filter for all eight voices. The second-generation Mk II model added a digital delay section. Was also released in a module version, the EX-800. The Korg SAS-20 was Korg's first arranger keyboard. A built-in computer analyzed the melody played on the keyboard, and generated a complex accompaniment. 
This was the world's first auto-accompaniment function of this kind added to a keyboard. Also, a more traditional chord recognition system was included. 1984 Korg RK-100: MIDI remote keyboard/keytar 1985 Korg MR-16: PCM-based digital drum machine, with dedicated outputs for each drum voice. It has been used by Aphex Twin. Korg DW-6000: Six-voice polyphonic; the user selected two digital waveforms out of 8 total. Used an analog filter. Korg DW-8000: Eight-voice polyphonic; the user selected two digital waveforms out of 16 total. Used an analog filter. Was also released in a rack-mount version, the EX-8000. Korg DDM-110 SuperDrums and Korg DDM-220 SuperPercussion: Low-cost digital drum machines 1986 Korg DSS-1: Korg's first sampling keyboard, with two oscillators per voice (eight voices). Offered additive synthesis, waveform drawing and effects, along with highly regarded analog filters. Korg DDD-1: Sampling drum machine. Korg DVP-1: Vocoder, pitch shifter, harmonizer, and digital synth sound module in a three-space rack unit. 1987 Korg DS-8: Expandable FM synthesizer. This synthesizer was powered by Yamaha's second-generation 4-operator FM engine. Korg DSS-1 Sound Library: sound cards for the Korg DSS-1 Korg DSM-1: The rack module version of the DSS-1. Offered additive synthesis, waveform drawing and effects, with 16 voices, a single oscillator per voice, double the RAM of the DSS-1, and the same analog filters. Korg 707: Expandable FM synthesizer. This synthesizer was powered by Yamaha's second-generation 4-operator FM engine. 1988 DRM-1: 16-bit drum station with 8 triggers, 8 individual outputs plus L-R, and voices covering two bass drums, two snares, one rimshot, hi-hats ("chs"), cymbals ("plates") and congas. Korg M1: PCM sample-based dual-oscillator synth engine with built-in effects, sequencer and drum machine. The M1 introduced many to the concept of a music workstation, a keyboard that could handle live performance, MIDI, sequencing, expandable sound banks, effects, and more in a single package. The best-selling synthesizer of all time (with 250,000 units sold worldwide as a single model). Its strikingly realistic sounds were made possible by using rich samples of acoustic and electric instruments as the initial sound source (versus the simple sine, saw and square waves used before) and applying a full synthesizer processing chain (filters, modulators, effects, etc.). 1989 Korg T series (T1/T2/T3): Some improvements over the M1 with added features. 1980s gallery 1990s 1990 Korg Wavestation: Vector synthesis and advanced wave sequencing. Co-designed by Sequential Circuits founder Dave Smith. Korg hired Dave Smith and some of his engineers when Sequential went bankrupt in 1987. Korg SoundLink SL-100C/M/S: Production-quality DAW (digital audio workstation) system 1991 Korg 01/W: PCM rompler with more waveforms and effects than the M1. The 01 series was the first Korg workstation to employ the new Ai2 Synthesis engine. The 01/W was produced in a range of models: the 61-key 01/W and its floppy disk-enabled cousin, the 01/Wfd; the 76-key 01/W Pro; the 88-key 01/W ProX; and a rack-mount 01R/W. The series also started the model naming system that lasted until the end of Triton line production, with a standard model (61-key), Pro (76-key) and ProX (88-key with piano-style weighted keyboard). Korg Wavestation EX: Upgraded Wavestation with additional waves (samples) and effect programs. Korg Wavestation A/D: A rack version of the Wavestation EX with analog audio inputs that can be incorporated in wave sequences. Korg S3 Rhythm Workstation: 8-pad sequencing drum machine. 
1992 Korg Wavestation SR: A 1U rack version of the Wavestation, emphasizing the preset sound library. 1993 Korg X3 / Korg X2 / Korg X3R: Music Workstation Korg i3 Interactive Music Workstation: Korg introduced its first professional arranger in 1993 with the i3 model, which used the same AI2 sound engine as Korg's pro synthesizer line. The i3 also included a multitrack MIDI sequencer in addition to the auto-accompaniment styles and arrangements, a large graphical display, improved chord recognition, and the new Backing Sequence feature, which facilitated creation of new songs based on styles. 1994 Korg WAVEDRUM: DSP percussion instrument based on State Variable technology and multiple synthesis algorithms. Korg X5: 61-key, 32-voice AI2, 16-part multitimbral with General MIDI Korg 05R/W: 32-voice AI2 half-rack synth module with General MIDI. Rack version of the X5. Korg i2: Korg introduced the i2, an improved i3 with a 76-note keyboard and a new piano sound. 1995 Korg i1: In 1995 a further improved version of the i3 was introduced: the Korg i1, which included an 88-note weighted keyboard, a larger piano sample, and built-in speakers. Korg i4S: The i4S (where "S" stands for "Speakers"). An i3-type keyboard with a slightly reduced feature set, but with built-in speakers. Korg i5S: The i5S was a scaled-down version of the i4S, with a plastic chassis and a reduced set of features. Some new sounds and styles were added. Korg i5M: An arranger module called i5M was also introduced, with specifications similar to the i5S, but with no keyboard, amplification, or joystick. Newly added traditional styles and sounds (shared with the i5S) particularly appealed to accordionists. Korg ih: In 1995, the ih introduced the "ih Interactive Vocal Harmony" feature that allowed for creation of vocal harmonies based on the input from a microphone, starting from chords played live in Style mode, or recorded in a Song's track. Korg Prophecy: One of the first physical modeling as well as virtual analog synthesizers. The Prophecy was monophonic and featured a unique cylindrical modulation wheel with integrated ribbon controller. Korg Trinity: This very successful workstation was the first to feature a large touch-screen as part of the front panel user interface, a feature that continued on Korg's flagship pro synth and arranger lines, and even on some of their digital multitrack recorders. Korg X5D/X5DR: The X5DR is the half-rack version. It is similar to an 05R/W, but with 64-note polyphony (instead of 32) and an additional set of patches. 1996 Korg N364/264: Introduced RPPR (Realtime Pattern Play/Recording) 1996-97 Korg Soundlink Digital Recording System: consists of the 168RC 8-bus digital console (1996), 880 D/A & 880 A/D converters, 1212 I/O card (1997), RM8 reference monitor (designed by Boston Acoustics), Trinity Pro X (HDR option), etc. 1997 Korg Z1: The Z1 carried concepts first heard on the Prophecy further, introducing Korg's Multi-Oscillator Synthesis System (MOSS), which produced sounds via dozens of different synthesis methods, including analog modeling and physical modeling. Korg D8: The D8 was Korg's first retail model of an integrated digital recording studio package, with 16-bit/44.1 kHz recording, 8 tracks, and stereo digital effects. It was followed by the D16 (1999), D12 (2000), D1600 (2000), D1200 (2002), D32XD/D16XD (2003), D3200 (2005), and the compact D4 (2005), among others. Korg iX300: The iX300 Interactive Music Workstation was introduced with new sounds and more than 100 styles. 
This model did not have built-in speakers. Korg NS5R: A half-rack AI2 module with 64-note polyphony and a large LCD display. Similar to the N364, but lacking RPPR or a sequencer. 1998 Korg iS40: The iS40 included new sounds (among them, a new stereo piano sample), new styles (128), and several new features. Keyboard Sets allowed for immediate recall of keyboard track settings. Korg iS50: The iS50 was the low-cost version of the iS40, with a slightly reduced feature set. Korg i30: The i30 Interactive Music Workstation was introduced, claiming to be the first arranger featuring a touch screen display. This model was speakerless, had 64 notes of polyphony, and more sounds than the iS40. Korg TR-Rack: The TR-Rack is a 1U rack module version of the Korg Trinity. It lacks any expansion slots, but has a larger internal sample ROM than the original Trinity. Korg N5: The N5 was introduced as a keyboard version of the Korg NS5R sound module, without an expansion slot. Korg N1/N1R: The N1 is an 88-key (piano-action) synthesizer. It is the expanded version of the N5 with a larger sample ROM for more AI2 voices and drum kits. In addition to the Korg voices, it provides full support for GM, GS and XG. It also has a very usable arpeggiator. It provides more output ports and effects than the N5, and the built-in voice (patch and combination) editor is easier to master. The N1R is the 1U rack version. 1999 Korg Triton: Successor to the Korg Trinity, and Korg's first keyboard to offer sampling since the DSS-1 from 1986. As a series, the Triton (Classic, Studio, Le, Extreme, TR, Karma, X50 and MicroX, all sharing a common synth engine and features) sold over 300,000 units. Korg Kaoss Pad Korg Electribe: A line of groove machines including an acid machine, a rhythm machine, a sampler, and other models. Korg i40M: Korg introduced a successor to the i5M: the i40M module. Specifications were similar to the iS40 (though with no keyboard or joystick), but included the Vocal Harmony feature as standard. Furthermore, the module included 3 different pre-programmed MIDI setups, to make connection with various instruments even easier. Korg iS35: The iS35 was a new version of the iS40, featuring the same specifications and adding the Vocal Harmony feature as standard. Korg iS50B: The iS50B had the same specs as the iS50, but in a dark blue chassis. Korg NX5R – Successor to the NS5R half-rack model. Similar, but with an additional set of XG-compatible sounds added through a daughterboard. Korg OASYS PCI – a DSP card that offered powerful and flexible audio synthesis, effects and audio I/O. Korg SP-100: An 88-key, velocity-sensitive keyboard with simulated hammer action. Little information about it was published, though the manual can be found at ManualsLib. 1990s gallery 2000s 2000 Korg CX-3: Not to be confused with Korg's CX-3 from 1980. This digital modeling organ added MIDI and many new features. Korg MS2000: Analog modeling synthesizer. Korg Triton Rack: 2U rackmount version of the Triton. Korg Pa80: A new range of arrangers from Korg was introduced in 2000: the Pa Series. The Pa80 was the first model, introduced in December 2000, with the same engine as Korg's Triton series, a wide selection of highly musical styles, a multitasking operating system and a dual sequencer design. 2001 Korg KARMA (Kay Algorithmic Realtime Music Architecture): Developed by Stephen Kay, a kind of arpeggiator that was more dynamic, organic, elastic and musical than previous forms. 
Korg Triton Studio: Featuring an onboard CD-R drive Korg KM Mixers: The KM-2 (Kaoss Mixer) is a DJ mixer, with a built-in update of the original Kaoss Pad, plus a sampler. Korg Pandora PXR4: digital multitrack recorder. 2002 Korg Pa60: Similar to the Pa80, but with a reduced feature set (lacking sampling and Harmony Board compatibility). Korg MicroKorg: A compact analog modeling synthesizer with built-in vocoder. Korg Triton LE 2003 Korg Pa1X Pro: The flagship arranger of a new pro arranger line, which marked Korg's return to professional arrangers without built-in speakers. It also marked the beginning of an active cooperation with the studio DSP manufacturer TC-Electronic. Korg MS2000B: new version of the MS2000 synthesizer with updated sound set, black metallic color scheme and dedicated vocoder mic; Korg MS2000BR: rack-mount version Korg microKONTROL: portable MIDI keyboard controller 2004 Korg Legacy Collection: Includes software emulations of three famous Korg synthesizers: the MS-20, Polysix, and the Wavestation Korg Pa1X: A shorter-scale version of the Pa1X Pro, but with built-in speakers. Korg Pa50: An affordable professional arranger synth with most of the features of the more expensive Pa60. Korg Kaoss Pad KP2, an improved re-release of the original Kaoss Pad. Korg Kaoss Pad Entrancer, an audio & visual processor version of the Kaoss Pad. Korg Triton Extreme: The successor to the Triton, which added "Valve Force" circuitry, a real vacuum tube circuit. Nicknamed "Russian Bullet," these tubes are rumored to last a minimum of 10 years. The Triton Extreme also featured a dramatically increased ROM size: 160 MB, including 32 MB of all-new acoustic samples. 2005 Korg OASYS (Open Architecture Synthesis Studio workstation) 2006 Korg Legacy Digital Edition: Includes software emulations of the Korg M1 and Wavestation synthesizers Korg TR: enhanced Triton Le music workstation Korg RADIAS Korg PadKontrol: drum-trigger style MIDI controller Korg D888: 8-track digital recorder Korg Kaoss Pad 3 Korg MicroX: A compact X50, with half of its sounds from the TR and half new, and the X50's software editing capability Korg X50: A stripped-down Korg TR with no sequencer, but with a linked software editor/librarian Korg Pa800: Successor of the award-winning Pa80, enhanced with features from the Pa1X Pro 2007 Korg M3: New flagship workstation, diverging from the famous Korg Triton line and often called a "mini Korg OASYS" Korg R3: A portable version of the RADIAS synthesizer. Korg mini-KP – At 4.25" x 4.5", the smallest member of the Kaoss series, offering much of the functionality of its larger siblings and running on either battery or AC power. Korg ZERO Mixers – Console-style (Zero8) and DJ-style (Zero4) mixers. Each incorporates a multi-channel FireWire audio interface and full DSP with a customizable MIDI control surface, making interfacing and performing with software straightforward. Both mixers had Traktor Scratch certification. Korg KM Mixers – The KM202 and KM404 are Korg's 2- and 4-channel DJ mixers. They featured the full Korg mini-KP interface and effects, which could be applied to selected channels. Eight different EQ models (including a full-cut isolator), selectable by a large dial on the panel, were another unique feature. Korg Kaossilator – Compact, handheld dynamic phrase synthesizer that features 100 programs including acoustic, percussion, and electronic sounds, a gate arpeggiator, 31 scale types ranging from Chromatic and Blues to Egyptian and Gypsy, and an 8-layer, 8-step sequencer for producing loop-based music. 
Following in the footsteps of Korg's KP technology, it features a touch pad where the horizontal axis varies pitch and the vertical axis varies tone. Released in January 2008 in the US. Korg Pa2X Pro – Successor to Korg's previous flagship professional arranger keyboard, the Pa2X Pro featured the Double MP3 Player/Recorder, the ability to slow down and transpose MP3 files, an improved 76-key keybed, a tiltable touch screen, phantom power, balanced in/out, digital audio output, and an internal clock. Toneworks – Guitar effects and processors AX10A – Modeling Signal Processor for Acoustic Guitar AX1000G – Modeling Signal Processor for Acoustic Guitar AX1500G – Modeling Signal Processor for Guitar AX3000B – Modeling Signal Processor for Bass AX3000G – Modeling Signal Processor for Guitar (the only model in the AX series still being produced) AX3A – Modeling Signal Processor AX3B – Modeling Signal Processor AX3G – Modeling Signal Processor AX5B – Modeling Signal Processor for Bass AX5G – Modeling Signal Processor for Guitar PX4A – Pandora: Acoustic Personal Multi-Effect Processor PX4D – Pandora: Personal Multi-Effect Processor 2008 Korg DS-10 – Music program for the Nintendo DS. Korg M50 – Music workstation Korg Nano Series – Slim-line controllers (nanoPad, nanoKey and nanoKontrol) (Used by Distortion in the studio) Korg Pa500 – After the success of the Pa50, the Pa500 was introduced with a completely redesigned user interface. Korg Pa588 – During 2008, Korg introduced the Pa588, a cross-over between an arranger (the acclaimed Pa500) and a digital stage piano, with an 88-note graded-weighted RH3 keyboard, built-in speakers, and a dedicated piano sample. It came with a piano stand included, and featured Pa-Series compatibility. 2009 Korg microKORG XL – An updated microKORG featuring the MMT (Multi Modeling Technology) sound engine as well as effects processors from the KAOSS line of products. Korg microSampler – A mini-key dedicated sampler. Korg Pa50SD – The Pa50 lost the old floppy disk drive in favour of SD card media. Korg SV-1 – Retro-looking stage piano – available in 73- or 88-key versions. 2000s gallery 2010s 2010 Korg Kaossilator Pro – An updated version of the KO-1, including external sampling, MIDI control, and SD card and USB support, contained in a bulkier, KP3-esque chassis. Korg PS60 – Performance synthesizer designed for the gigging musician. Korg Microstation – Continuing with the "micro" series, Korg released a workstation with the traditional mini keys. Korg Monotron – A small analogue ribbon synthesizer capable of running external audio through its on-board filter, based on the Korg MS-20. Korg MP10Pro – Professional media player Korg iElectribe – A touch screen version of the Korg Electribe made for Apple's iPad. Korg microKEY – A compact MIDI controller featuring the same keys used on the microKORG XL. Korg AW2U – A dedicated clip-on ukulele tuner. Korg MR2 – High-resolution mobile recorder. Korg MicroMetro – A tiny, compact metronome. Korg SP170 – Korg's cheapest and smallest digital piano to date Korg SOS (Sound on Sound) – A completely self-contained unlimited-track recorder. Korg iMS-20 – Like the iElectribe, a digital touch screen version of the Korg MS-20 analog synth made for the Apple iPad. 2011 Korg Kronos – New synthesizer workstation, a successor of the Korg OASYS. Korg Wavedrum Oriental – A Middle Eastern, Arabian-styled version of the Wavedrum. Korg PA3X – Professional Arranger Workstation. 
Korg KAOSS PAD Quad – A new version of the KAOSS PAD that allows the user to run four simultaneous effects at once. Korg nanoSERIES 2 – New and improved versions of the nanoKEY, nanoPAD and nanoKONTROL, providing the same functions as their predecessors. Korg Monotribe – An advanced version of the Monotron with additional features borrowed from the Electribe series, hence its name. Korg Wavedrum Mini – A more compact version of the popular Wavedrum, containing new features such as a built-in speaker for 'on-the-go' use. Korg MMA130 – A powered mobile monitoring amp designed to be portable without sacrificing sound quality. Korg Pitch-clip – A clip-on tuner. Korg microARRANGER – A complete arranger keyboard with built-in speakers and Korg's signature miniature keys, as used on the microSTATION. Korg Monotron Duo – A development of the Monotron Classic featuring two square-wave VCOs, X-Mod and a VCF. Korg Monotron Delay – A development of the Monotron Classic featuring one sawtooth VCO, an LFO with two wave shapes, a VCF and a delay. 2012 Korg Kaossilator 2 – The successor to the Kaossilator and the Kaossilator Pro. Introduced at NAMM 2012, the Kaossilator 2 features a redesigned pocket-sized body and a small OEL display. Korg Kronos X – An expanded version of the already powerful Kronos workstation. Korg microKEY 25 and 61 – Alternative-sized options for the portable MIDI controller, introduced after the original model. Korg TM-50 – A tuner and metronome (hence 'TM') combi model. Korg Krome – A mid-price workstation with sounds derived from the Kronos. Colour Options – Alternative colour versions were made available for some of Korg's existing products including the microKORG, microKORG XL and microKEY. Korg microKORG XL+ – An updated version of the popular microKORG XL containing new and classic sounds from Korg's previous keyboards. Korg PA600 and PA600QT – Mid-price arranger keyboards. The PA600QT is a quarter-tone version with oriental programs. Korg TMR-50 – Portable tuner, metronome and recorder. Korg PA55TR – Oriental version of the PA50, released only in Turkey. 2013 Korg MS-20 mini – A faithful analog recreation of the original MS-20 in a slightly smaller physical package. Korg KingKORG – A 61-key virtual analog synthesizer Korg SP-280 – Digital piano Korg Wavedrum Global Edition – Improved usability, new soundsets and better-sounding versions of the existing soundsets Korg KP3+ – Updated features and effects based on the original KP3 Korg Kaossilator Pro+ – Additional sounds including new drum kits as well as the original sets KR mini Korg Rhythm – compact rhythm machine with a built-in speaker, battery powered Korg TM-50C Combo – Tuner/metronome plus a contact microphone Korg pitchblack – Polyphonic tuner Korg CM-200 – Contact microphone designed especially for tuners Korg HeadTune – Clip-on tuners for guitar, bass and ukulele Korg M01D – Music program for the Nintendo 3DS Korg PA900 – Arranger. Korg Volca Series (Volca Keys, Volca Bass, Volca Beats) – True-analog synthesizers with built-in sequencers. Korg Kross – A mobile synthesizer keyboard / music workstation. Korg Pandora Stomp – A "stompbox" version of the Korg Pandora Mini multi-effects unit for guitar & bass, also incorporating a Korg pitchblack tuner. Korg MS-20 Kit – A full-size replica of the classic Korg MS-20, including selectable filter circuitry from both revisions of the original model. Assembly required. 
2014 Korg Pa300 – Entry-level, compact-sized professional arranger. Korg Gadget – iPad DAW app. Korg Pa3X Le – A professional arranger. Korg Electribe (sampler) – A music production station. Korg Mini Kaoss Pad 2S – A dynamic effects processor. Korg Volca Sample – A digital sample sequencer. Korg RK-100S – MIDI remote keyboard (keytar) Korg Kronos 2 – Update to the Korg Kronos/Kronos X. Korg DSN-12 – music creation software for the Nintendo 3DS developed with Japanese developer Detune 2015 ARP Odyssey duophonic synthesizer – reissue of the ARP Odyssey at 86% of the original’s size. Korg SQ-1 step sequencer – compact 2x8 step sequencer. Korg MS-20M Kit + SQ-1 step sequencer – monophonic synthesizer module kit – desktop version of the Korg MS-20 Kit, with new features including oscillator sync, FM, PWM, a switchable early/later-type filter, and a CV/Gate interface supporting various specs (Hz/V, V/Oct, S-Trig, V-Trig). An SQ-1 step sequencer is also bundled. Korg Kaoss DJ – DJ controller. Korg Kaossilator 2S – Dynamic phrase synthesizer. Korg Havian 30 – Home piano with arranger. Korg PA4X – Korg's flagship Professional Arranger Workstation. 2016 Korg Pa600MY – Korg's official Malaysia-localised version of the PA600, with Malay, Chinese and Indian instrument sounds and musical styles Korg Minilogue – A programmable four-voice polyphonic analog synthesizer with a built-in sequencer and delay effect. Korg Volca FM – three-voice polyphonic digital FM synthesizer. Korg PA4X ORIENTAL – An Oriental version of Korg's flagship Professional Arranger Workstation, covering Arabic, Persian and Turkish music in addition to Western styles. It comes with a bonus package that includes musical resources from Middle Eastern libraries. Korg Volca Kick – Analog kick generator. Korg Kronos 2 Platinum music workstation. 2017 Korg Kronos 2 Gold music workstation. Korg Krome Platinum music workstation. Korg Monologue – monophonic analog synthesizer. Korg Grandstage – multi-engine stage piano. Korg PA1000 – Arranger. Korg PA700 – Arranger. 2018 Korg prologue - polyphonic analog synthesizer. Korg EK 50 - Entertainer Keyboard Korg Volca Drum. Korg Volca Modular. Korg Volca Mix. Korg Minilogue XD. 2019 Korg Kronos SE. Korg Krome EX. Korg Volca Nubass. Korg Nu:Tekt. 2010s gallery 2020s 2020 Korg Wavestate – wave sequencing synthesizer, a tribute to the popular Korg Wavestation. Korg i3 – entry-level arranger workstation. Korg EK 50 Limitless - Entertainer Keyboard Korg SV-2 – significant update to the SV-1 stage piano, which first appeared in 2009 – available in 73- or 88-key versions. Korg opsix - altered FM synthesizer with a 3-octave keyboard. Operators can do FM, ring modulation and filter FM, as well as act as either a filter or a wavefolder. Korg ARP 2600FS - semi-modular synthesizer, a reproduction of the ARP 2600 synthesizer from the 70s. Korg RK-100S v2 - update to the 2014 revival of the popular keytar from the 80s. Korg Kronos 2 Titanium (Limited Edition) music workstation Korg Nautilus (61, 73 and 88 key) music workstation - A lower-cost version of the Kronos. 2021 Korg Modwave – Follow-up to the DW-8000, based around wavetable synthesis and motion sequencing. Korg miniKORG 700FS – Full-size reproduction of the original 700S with additional modern features. Korg ARP 2600 M – Smaller, keyless version of the ARP 2600 recreation released in 2020. 
Korg SQ-64 – Polyphonic hardware sequencer Korg LP-380-U – Digital piano Korg GM-1 – Group metronome References External links Korg official home page Korg Museum by Korg Korg Page at Synthmuseum.com Korg Page at Wikizic.org Korg Kornucopia – Korg analogue synthesizer information, manuals and resources Information on Korg's analogue vintage instruments Korg Monotron & Monotribe patches, sounds and videos Audio interview with Mitch Colby (EVP / CMO of Korg USA)  (related topic on KorgForum.com) Korg products
307145
https://en.wikipedia.org/wiki/Two%27s%20complement
Two's complement
Two's complement is a mathematical operation on binary numbers, and is an example of a radix complement. It is used in computing as a method of signed number representation. When the most significant bit is 1, the number represented is negative (see "Converting from two's complement representation" below). The two's complement of an N-bit number is defined as its complement with respect to 2^N; the sum of a number and its two's complement is 2^N. For instance, for the three-bit number 010₂, the two's complement is 110₂, because 010₂ + 110₂ = 1000₂ = 2^3 = 8₁₀. The two's complement is calculated by inverting the bits and adding one. Two's complement is the most common method of representing signed integers on computers, and more generally, fixed point binary values. In this scheme, if the binary number 011₂ encodes the signed integer 3₁₀, then its two's complement, 101₂, encodes the inverse: −3₁₀. In other words, the sign of most integers (all but one of them) can be reversed in this scheme by taking the two's complement of its binary representation. Compared to other systems for representing signed numbers (e.g., ones' complement), two's complement has the advantage that the fundamental arithmetic operations of addition, subtraction, and multiplication are identical to those for unsigned binary numbers (as long as the inputs are represented in the same number of bits as the output, and any overflow beyond those bits is discarded from the result). This property makes the system simpler to implement, especially for higher-precision arithmetic. Unlike ones' complement systems, two's complement has no representation for negative zero, and thus does not suffer from its associated difficulties. Conveniently, another way of finding the two's complement of a number is to take its ones' complement and add one: the sum of a number and its ones' complement is all '1' bits, or 2^N − 1; and by definition, the sum of a number and its two's complement is 2^N. History The method of complements had long been used to perform subtraction in decimal adding machines and mechanical calculators. John von Neumann suggested use of two's complement binary representation in his 1945 First Draft of a Report on the EDVAC proposal for an electronic stored-program digital computer. The 1949 EDSAC, which was inspired by the First Draft, used two's complement representation of binary numbers. Many early computers, including the CDC 6600, the LINC, the PDP-1, and the UNIVAC 1107, use ones' complement notation; the descendants of the UNIVAC 1107, the UNIVAC 1100/2200 series, continued to do so. The IBM 700/7000 series scientific machines use sign/magnitude notation, except for the index registers which are two's complement. Early commercial two's complement computers include the Digital Equipment Corporation PDP-5 and the 1963 PDP-6. The System/360, introduced in 1964 by IBM, then the dominant player in the computer industry, made two's complement the most widely used binary representation in the computer industry. The first minicomputer, the PDP-8 introduced in 1965, uses two's complement arithmetic as do the 1969 Data General Nova, the 1970 PDP-11, and almost all subsequent minicomputers and microcomputers. Converting from two's complement representation A two's-complement number system encodes positive and negative numbers in a binary number representation. The weight of each bit is a power of two, except for the most significant bit, whose weight is the negative of the corresponding power of two. The value w of an N-bit integer with bits a_{N−1} a_{N−2} … a_0 is given by the following formula: w = −a_{N−1} · 2^{N−1} + Σ_{i=0}^{N−2} a_i · 2^i. The most significant bit determines the sign of the number and is sometimes called the sign bit. Unlike in sign-and-magnitude representation, the sign bit also has the weight −2^{N−1} shown above. Using N bits, all integers from −2^{N−1} to 2^{N−1} − 1 can be represented.
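As a rough, self-contained sketch of the formula above (not taken from the article), the following C function computes the value of an N-bit pattern by giving bit N−1 a negative weight and every other set bit its usual positive weight; twos_value and its parameters are names invented for this illustration.

#include <stdint.h>
#include <stdio.h>

/* Value of an N-bit two's-complement pattern (n between 1 and 32):
   bits 0..N-2 contribute +2^i, bit N-1 contributes -2^(N-1). */
static int64_t twos_value(uint32_t bits, unsigned n)
{
    int64_t value = 0;
    for (unsigned i = 0; i + 1 < n; i++)      /* positive weights */
        if (bits & (1u << i))
            value += (int64_t)1 << i;
    if (bits & (1u << (n - 1)))               /* negative weight of the sign bit */
        value -= (int64_t)1 << (n - 1);
    return value;
}

int main(void)
{
    printf("%lld\n", (long long)twos_value(0x5u, 3));   /* 101 in 3 bits -> -3 */
    printf("%lld\n", (long long)twos_value(0xFu, 4));   /* 1111 in 4 bits -> -1 */
    return 0;
}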
Converting to two's complement representation In two's complement notation, a non-negative number is represented by its ordinary binary representation; in this case, the most significant bit is 0. However, the range of numbers represented is not the same as with unsigned binary numbers. For example, an 8-bit unsigned number can represent the values 0 to 255 (1111 1111₂). However, a two's complement 8-bit number can only represent non-negative integers from 0 to 127 (0111 1111₂), because the rest of the bit combinations with the most significant bit as '1' represent the negative integers −1 to −128. The two's complement operation is the additive inverse operation, so negative numbers are represented by the two's complement of the absolute value. From the ones' complement To get the two's complement of a negative binary number, the bits are inverted, or "flipped", by using the bitwise NOT operation; the value of 1 is then added to the resulting value, ignoring the overflow which occurs when taking the two's complement of 0. For example, using 1 byte (= 8 bits), the decimal number 5 is represented by 0000 0101₂. The most significant bit is 0, so the pattern represents a non-negative value. To convert to −5 in two's-complement notation, first, the bits are inverted, that is: 0 becomes 1 and 1 becomes 0: 1111 1010₂. At this point, the representation is the ones' complement of the decimal value −5. To obtain the two's complement, 1 is added to the result, giving: 1111 1011₂. The result is a signed binary number representing the decimal value −5 in two's-complement form. The most significant bit is 1, so the value represented is negative. The two's complement of a negative number is the corresponding positive value, except in the special case of the most negative number. For example, inverting the bits of −5 (above) gives: 0000 0100₂. And adding one gives the final value: 0000 0101₂. Likewise, the two's complement of zero is zero: inverting gives all ones, and adding one changes the ones back to zeros (since the overflow is ignored). The two's complement of the most negative number representable (e.g. a one as the most-significant bit and all other bits zero) is itself. Hence, there is an 'extra' negative number for which two's complement does not give the negation; see below.
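The invert-then-add-one procedure just described can be sketched in C as follows; negate8 is an illustrative name, and the arithmetic is done on an unsigned byte so the wrap-around is well defined in the language.

#include <stdint.h>
#include <stdio.h>

/* Two's-complement negation of an 8-bit pattern: invert the bits, add one.
   The single exception is 0x80 (-128), which negates to itself (see below). */
static uint8_t negate8(uint8_t x)
{
    return (uint8_t)(~x + 1u);
}

int main(void)
{
    printf("0x%02X\n", negate8(0x05)); /* 0000 0101 (5)   -> 0xFB = 1111 1011 (-5) */
    printf("0x%02X\n", negate8(0xFB)); /* 1111 1011 (-5)  -> 0x05 (5)              */
    printf("0x%02X\n", negate8(0x00)); /* zero stays zero; the carry out is discarded */
    printf("0x%02X\n", negate8(0x80)); /* -128 maps to itself: the "extra" value    */
    return 0;
}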
Subtraction from 2^N The sum of a number and its ones' complement is an N-bit word with all 1 bits, which is (reading it as an unsigned binary number) 2^N − 1. Then adding a number to its two's complement results in the N lowest bits set to 0 and the carry bit 1, where the latter has the weight (reading it as an unsigned binary number) of 2^N. Hence, in unsigned binary arithmetic the value of the two's-complement negative x* of a positive number x satisfies the equality x* = 2^N − x. For example, to find the 4-bit representation of −5 (subscripts denote the base of the representation): x = 5₁₀ = 0101₂, therefore, with N = 4, x* = 2^N − x = 2^4 − 5₁₀ = 10000₂ − 0101₂ = 1011₂. The calculation can be done entirely in base 10, converting to base 2 at the end: x* = 2^4 − 5 = 11₁₀ = 1011₂. Working from LSB towards MSB A shortcut to manually convert a binary number into its two's complement is to start at the least significant bit (LSB), and copy all the zeros, working from LSB toward the most significant bit (MSB) until the first 1 is reached; then copy that 1, and flip all the remaining bits (leave the MSB as a 1 if the initial number was in sign-and-magnitude representation). This shortcut allows a person to convert a number to its two's complement without first forming its ones' complement. For example: in two's complement representation, the negation of "0011 1100" is "1100 0100", where the trailing "100" was copied unchanged and the remaining digits were flipped. In computer circuitry, this method is no faster than the "complement and add one" method; both methods require working sequentially from right to left, propagating logic changes. The method of complementing and adding one can be sped up by a standard carry look-ahead adder circuit; the LSB towards MSB method can be sped up by a similar logic transformation. Sign extension When turning a two's-complement number with a certain number of bits into one with more bits (e.g., when copying from a 1-byte variable to a 2-byte variable), the most-significant bit must be repeated in all the extra bits. Some processors do this in a single instruction; on other processors, a conditional must be used followed by code to set the relevant bits or bytes. Similarly, when a two's-complement number is shifted to the right, the most-significant bit, which contains magnitude and the sign information, must be maintained. However, when shifted to the left, a 0 is shifted in. These rules preserve the common semantics that left shifts multiply the number by two and right shifts divide the number by two. Both shifting and doubling the precision are important for some multiplication algorithms. Note that unlike addition and subtraction, width extension and right shifting are done differently for signed and unsigned numbers.
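A minimal C sketch of sign extension, assuming the field width is between 1 and 31 bits; sign_extend is a hypothetical helper written for this example, not a standard library function.

#include <stdint.h>
#include <stdio.h>

/* Sign-extend the low 'bits' bits of 'raw' (1 <= bits <= 31) to a 32-bit
   signed integer by repeating the most significant bit of the field. */
static int32_t sign_extend(uint32_t raw, unsigned bits)
{
    uint32_t sign = 1u << (bits - 1);        /* mask for the field's sign bit  */
    raw &= (sign << 1) - 1u;                 /* keep only the low 'bits' bits  */
    return (int32_t)(raw ^ sign) - (int32_t)sign;  /* branch-free extension    */
}

int main(void)
{
    printf("%d\n", sign_extend(0xFB, 8));    /* 1111 1011 as an 8-bit field -> -5  */
    printf("%d\n", sign_extend(0x7B, 8));    /* 0111 1011 as an 8-bit field -> 123 */
    printf("%d\n", sign_extend(0xFFF, 12));  /* 12-bit all-ones             -> -1  */
    return 0;
}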
Most negative number With only one exception, starting with any number in two's-complement representation, if all the bits are flipped and 1 added, the two's-complement representation of the negative of that number is obtained. Positive 12 becomes negative 12, positive 5 becomes negative 5, zero becomes zero (with the overflow ignored), etc. Taking the two's complement of the minimum number in the range will not have the desired effect of negating the number. For example, the two's complement of −128 in an 8-bit system is −128. Although the expected result from negating −128 is +128, there is no representation of +128 in an 8-bit two's complement system and thus it is in fact impossible to represent the negation. Note that the two's complement being the same number is detected as an overflow condition since there was a carry into but not out of the most-significant bit. Mathematically, this is complementary to the fact that the negative of 0 is again 0. For a given number of bits there is an even number 2^N of binary numbers, taking negatives is a group action (of the group of order 2) on binary numbers, and since the orbit of zero has order 1, at least one other number must have an orbit of order 1 for the orders of the orbits to add up to the order of the set. Thus some other number must be invariant under taking negatives (formally, by the orbit-stabilizer theorem). Geometrically, one can view the N-bit binary numbers as the cyclic group Z/2^N Z, which can be visualized as a circle (or properly a regular 2^N-gon), and taking negatives is a reflection, which fixes the elements of order dividing 2: 0 and the opposite point, or visually the zenith and nadir. The presence of the most negative number can lead to unexpected programming bugs where the result has an unexpected sign, or leads to an unexpected overflow exception, or leads to completely strange behaviors. For example, the unary negation operator may not change the sign of a nonzero number, e.g., −(−128) → −128. An implementation of absolute value may return a negative number, e.g., abs(−128) → −128. Likewise, multiplication by −1 may fail to function as expected, e.g., (−128) × (−1) → −128. Division by −1 may cause an exception (like that caused by dividing by 0). Even calculating the remainder (or modulo) by −1 can trigger this exception, e.g., (−128) ÷ (−1) → crash, (−128) % (−1) → crash. In the C and C++ programming languages, the above behaviours are undefined and not only may they return strange results, but the compiler is free to assume that the programmer has ensured that undefined computations never happen, and make inferences from that assumption. This enables a number of optimizations, but also leads to a number of strange bugs in such undefined programs. The most negative number in two's complement is sometimes called "the weird number", because it is the only exception. Although the number is an exception, it is a valid number in regular two's complement systems. All arithmetic operations work with it both as an operand and (unless there was an overflow) a result. Why it works Given the set of all possible N-bit values, we can assign the lower (by the binary value) half to be the integers from 0 to 2^{N−1} − 1 inclusive and the upper half to be −2^{N−1} to −1 inclusive. The upper half (again, by the binary value) can be used to represent negative integers from −2^{N−1} to −1 because, under addition modulo 2^N, they behave the same way as those negative integers. That is to say that, because x and x + 2^N are congruent modulo 2^N, any value in the set { x, x + 2^N, x − 2^N, … } can be used in place of x. For example, with eight bits, the unsigned bytes are 0 to 255. Subtracting 256 from the top half (128 to 255) yields the signed bytes −128 to −1. The relationship to two's complement is realised by noting that 256 = 255 + 1, and 255 − x is the ones' complement of x. Example In this subsection, decimal numbers are suffixed with a decimal point "." For example, an 8-bit number can only represent every integer from −128. to 127., inclusive, since there are only 2^8 = 256. distinct patterns available. −95. modulo 256. is equivalent to 161. since −95. + 256. = −95. + 255. + 1 = 255. − 95. + 1 = 160. + 1. = 161.
  1111 1111                        255.
− 0101 1111                      −  95.
===========                      =====
  1010 0000  (ones' complement)   160.
+         1                      +  1.
===========                      =====
  1010 0001  (two's complement)   161.
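The following small C program illustrates the modulo-2^N correspondence described above for 8-bit bytes: −95 and 161 are the same residue modulo 256 and therefore the same bit pattern. Reading a pattern back as signed assumes two's-complement behaviour, which is only mandated by the C standard from C23 onward but is universal on real hardware.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* -95 stored in a byte has the same bit pattern as unsigned 161,
       because -95 + 256 = 161. */
    int8_t  s = -95;
    uint8_t u = (uint8_t)s;          /* signed-to-unsigned conversion adds 256 */
    printf("%d\n", u);               /* prints 161 */

    /* Addition agrees modulo 256: (-95) + 100 = 5 and 161 + 100 = 261 = 5 (mod 256). */
    uint8_t sum = (uint8_t)(u + 100u);
    printf("%d\n", sum);             /* prints 5 */

    /* Reading the pattern back as signed relies on two's complement in practice. */
    int8_t back = (int8_t)sum;
    printf("%d\n", back);            /* prints 5 */
    return 0;
}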
Fundamentally, the system represents negative integers by counting backward and wrapping around. The boundary between positive and negative numbers is arbitrary, but by convention all negative numbers have a left-most bit (most significant bit) of one. Therefore, the most positive 4-bit number is 0111 (7.) and the most negative is 1000 (−8.). Because of the use of the left-most bit as the sign bit, the absolute value of the most negative number (|−8.| = 8.) is too large to represent. Negating a two's complement number is simple: invert all the bits and add one to the result. For example, negating 1111, we get 0000 + 1 = 0001 (1.). Therefore, 1111 in binary must represent −1 in decimal. The system is useful in simplifying the implementation of arithmetic on computer hardware. Adding 0011 (3.) to 1111 (−1.) at first seems to give the incorrect answer of 10010. However, the hardware can simply ignore the left-most bit to give the correct answer of 0010 (2.). Overflow checks still must exist to catch operations such as summing 0100 and 0100. The system therefore allows addition of negative operands without a subtraction circuit or a circuit that detects the sign of a number. Moreover, that addition circuit can also perform subtraction by taking the two's complement of a number (see below), which only requires an additional cycle or its own adder circuit. To perform this, the circuit merely operates as if there were an extra left-most bit of 1. Arithmetic operations Addition Adding two's-complement numbers requires no special processing even if the operands have opposite signs: the sign of the result is determined automatically. For example, adding 15 and −5:
  11111 111   (carry)
   0000 1111  (15)
 + 1111 1011  (−5)
 ===========
   0000 1010  (10)
Or the computation of 5 − 15 = 5 + (−15):
        1     (carry)
   0000 0101  ( 5)
 + 1111 0001  (−15)
 ===========
   1111 0110  (−10)
This process depends upon restricting to 8 bits of precision; a carry to the (nonexistent) 9th most significant bit is ignored, resulting in the arithmetically correct result of decimal 10. The last two bits of the carry row (reading right-to-left) contain vital information: whether the calculation resulted in an arithmetic overflow, a number too large for the binary system to represent (in this case greater than 8 bits). An overflow condition exists when these last two bits are different from one another. As mentioned above, the sign of the number is encoded in the MSB of the result. In other terms, if the left two carry bits (the ones on the far left of the top row in these examples) are both 1s or both 0s, the result is valid; if the left two carry bits are "1 0" or "0 1", a sign overflow has occurred. Conveniently, an XOR operation on these two bits can quickly determine if an overflow condition exists. As an example, consider the signed 4-bit addition of 7 and 3:
  0111   (carry)
   0111  (7)
 + 0011  (3)
 ======
   1010  (−6)  invalid!
In this case, the far left two (MSB) carry bits are "01", which means there was a two's-complement addition overflow. That is, 1010₂ = 10₁₀, which is outside the permitted range of −8 to 7. The result would be correct if treated as an unsigned integer. In general, any two N-bit numbers may be added without overflow, by first sign-extending both of them to N + 1 bits, and then adding as above. The N + 1 bits result is large enough to represent any possible sum (5-bit two's complement can represent values in the range −16 to 15) so overflow will never occur. It is then possible, if desired, to 'truncate' the result back to N bits while preserving the value if and only if the discarded bit is a proper sign extension of the retained result bits. This provides another method of detecting overflow—which is equivalent to the method of comparing the carry bits—but which may be easier to implement in some situations, because it does not require access to the internals of the addition.
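A small C sketch of the overflow test just described, using the equivalent sign-based formulation (the result sign differs from the sign of both operands exactly when the carries into and out of the MSB differ); add8 is an illustrative name chosen for this example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Add two 8-bit two's-complement patterns and report signed overflow. */
static uint8_t add8(uint8_t a, uint8_t b, bool *overflow)
{
    uint8_t r = (uint8_t)(a + b);                 /* wraps modulo 256 */
    *overflow = ((a ^ r) & (b ^ r) & 0x80u) != 0; /* sign flipped relative to both inputs */
    return r;
}

int main(void)
{
    bool v;
    uint8_t r;

    r = add8(0x0F, 0xFB, &v);   /* 15 + (-5) = 10: no overflow           */
    printf("0x%02X overflow=%d\n", r, v);

    r = add8(0x70, 0x70, &v);   /* 112 + 112 = 224 = -32 in 8 bits: overflow */
    printf("0x%02X overflow=%d\n", r, v);
    return 0;
}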
Like addition, the advantage of using two's complement is the elimination of examining the signs of the operands to determine whether addition or subtraction is needed. For example, subtracting −5 from 15 is really adding 5 to 15, but this is hidden by the two's-complement representation: 11110 000 (borrow) 0000 1111 (15) − 1111 1011 (−5) =========== 0001 0100 (20) Overflow is detected the same way as for addition, by examining the two leftmost (most significant) bits of the borrows; overflow has occurred if they are different. Another example is a subtraction operation where the result is negative: 15 − 35 = −20: 11100 000 (borrow) 0000 1111 (15) − 0010 0011 (35) =========== 1110 1100 (−20) As for addition, overflow in subtraction may be avoided (or detected after the operation) by first sign-extending both inputs by an extra bit. Multiplication The product of two n-bit numbers requires 2n bits to contain all possible values. If the precision of the two operands using two's complement is doubled before the multiplication, direct multiplication (discarding any excess bits beyond that precision) will provide the correct result. For example, take 6 × (−5) = −30. First, the precision is extended from four bits to eight. Then the numbers are multiplied, discarding the bits beyond the eighth bit (as shown by "x"): 00000110 (6) * 11111011 (−5) ============ 110 1100 00000 110000 1100000 11000000 x10000000 + xx00000000 ============ xx11100010 This is very inefficient; by doubling the precision ahead of time, all additions must be double-precision and at least twice as many partial products are needed as for the more efficient algorithms actually implemented in computers. Some multiplication algorithms are designed for two's complement, notably Booth's multiplication algorithm. Methods for multiplying sign-magnitude numbers don't work with two's-complement numbers without adaptation. There isn't usually a problem when the multiplicand (the one being repeatedly added to form the product) is negative; the issue is setting the initial bits of the product correctly when the multiplier is negative. Two methods for adapting algorithms to handle two's-complement numbers are common: First check to see if the multiplier is negative. If so, negate (i.e., take the two's complement of) both operands before multiplying. The multiplier will then be positive so the algorithm will work. Because both operands are negated, the result will still have the correct sign. Subtract the partial product resulting from the MSB (pseudo sign bit) instead of adding it like the other partial products. This method requires the multiplicand's sign bit to be extended by one position, being preserved during the shift right actions. As an example of the second method, take the common add-and-shift algorithm for multiplication. Instead of shifting partial products to the left as is done with pencil and paper, the accumulated product is shifted right, into a second register that will eventually hold the least significant half of the product. Since the least significant bits are not changed once they are calculated, the additions can be single precision, accumulating in the register that will eventually hold the most significant half of the product. 
In the following example, again multiplying 6 by −5, the two registers and the extended sign bit are separated by "|": 0 0110 (6) (multiplicand with extended sign bit) × 1011 (−5) (multiplier) =|====|==== 0|0110|0000 (first partial product (rightmost bit is 1)) 0|0011|0000 (shift right, preserving extended sign bit) 0|1001|0000 (add second partial product (next bit is 1)) 0|0100|1000 (shift right, preserving extended sign bit) 0|0100|1000 (add third partial product: 0 so no change) 0|0010|0100 (shift right, preserving extended sign bit) 1|1100|0100 (subtract last partial product since it's from sign bit) 1|1110|0010 (shift right, preserving extended sign bit) |1110|0010 (discard extended sign bit, giving the final answer, −30) Comparison (ordering) Comparison is often implemented with a dummy subtraction, where the flags in the computer's status register are checked, but the main result is ignored. The zero flag indicates if two values compared equal. If the exclusive-or of the sign and overflow flags is 1, the subtraction result was less than zero, otherwise the result was zero or greater. These checks are often implemented in computers in conditional branch instructions. Unsigned binary numbers can be ordered by a simple lexicographic ordering, where the bit value 0 is defined as less than the bit value 1. For two's complement values, the meaning of the most significant bit is reversed (i.e. 1 is less than 0). The following algorithm (for an n-bit two's complement architecture) sets the result register R to −1 if A < B, to +1 if A > B, and to 0 if A and B are equal: // reversed comparison of the sign bit if A(n-1) == 0 and B(n-1) == 1 then return +1 else if A(n-1) == 1 and B(n-1) == 0 then return -1 end // comparison of remaining bits for i = n-2...0 do if A(i) == 0 and B(i) == 1 then return -1 else if A(i) == 1 and B(i) == 0 then return +1 end end return 0 Two's complement and 2-adic numbers In a classic HAKMEM published by the MIT AI Lab in 1972, Bill Gosper noted that whether or not a machine's internal representation was two's-complement could be determined by summing the successive powers of two. In a flight of fancy, he noted that the result of doing this algebraically indicated that "algebra is run on a machine (the universe) which is two's-complement." Gosper's end conclusion is not necessarily meant to be taken seriously, and it is akin to a mathematical joke. The critical step is "...110 = ...111 − 1", i.e., "2X = X − 1", and thus X = ...111 = −1. This presupposes a method by which an infinite string of 1s is considered a number, which requires an extension of the finite place-value concepts in elementary arithmetic. It is meaningful either as part of a two's-complement notation for all integers, as a typical 2-adic number, or even as one of the generalized sums defined for the divergent series of real numbers 1 + 2 + 4 + 8 + ···. Digital arithmetic circuits, idealized to operate with infinite (extending to positive powers of 2) bit strings, produce 2-adic addition and multiplication compatible with two's complement representation. Continuity of binary arithmetical and bitwise operations in 2-adic metric also has some use in cryptography. Fraction conversion To convert a number with a fractional part, such as .0101, one must convert starting from right to left the 1s to decimal as in a normal conversion. In this example 0101 is equal to 5 in decimal. Each digit after the binary point represents a fraction where the denominator is a power of 2. 
So, the first is 1/2, the second is 1/4 and so on. Having already calculated the decimal value as mentioned above, only the denominator of the LSB (the least significant bit, counting from the right) is used. The final result of this conversion is 5/16. For instance, for the value .0110, this method requires that the trailing 0 on the right not be counted. Hence, instead of calculating the decimal value for 0110, we calculate the value 011, which is 3 in decimal (keeping the trailing 0 would give 6 over the denominator 2^4 = 16, which also reduces to 3/8). The denominator is 2^3 = 8, giving a final result of 3/8. See also Division algorithm, including restoring and non-restoring division in two's-complement representations Offset binary p-adic number Method of complements, generalisation to other number bases, used on mechanical calculators References Further reading Two's Complement Explanation, (Thomas Finley, 2000) External links Two's complement array multiplier JavaScript simulator Binary arithmetic Articles with example Python (programming language) code
3174450
https://en.wikipedia.org/wiki/PQS%20%28software%29
PQS (software)
PQS is a general purpose quantum chemistry program. Its roots go back to the first ab initio gradient program developed in Professor Peter Pulay's group, but it is now developed and distributed commercially by Parallel Quantum Solutions. Reduced pricing and a site license are available for academic users. Its strong points are geometry optimization, NMR chemical shift calculations, large MP2 calculations, and high parallel efficiency on computing clusters. It includes many other capabilities, including density functional theory, the semiempirical methods MINDO/3, MNDO, AM1 and PM3, molecular mechanics using the SYBYL 5.0 force field, the mixed quantum mechanics/molecular mechanics method using the ONIOM approach, natural bond orbital (NBO) analysis and COSMO solvation models. Recently, a highly efficient parallel CCSD(T) code for closed-shell systems has been developed. The program also includes many other post-Hartree–Fock methods: MP2, MP3, MP4, CISD, CEPA, QCISD and so on. History The PQS program has its origins in code developed by Meyer and Pulay in the late 1960s. Both were at the Max Planck Institute for Physics and Astrophysics in Munich when they began to write a new ab initio program. The main purpose was to establish new ab initio techniques. Pulay and Meyer had slightly different interests. Pulay was interested in implementing gradient geometry optimization, analytical energy derivatives (forces), and force constant calculations via the numerical differentiation of analytical forces, while Meyer was interested in the coupled-electron pair approximation (CEPA), spin density calculations, and extremely accurate correlation methods like pseudonatural orbital-configuration interaction (PNO-CI). At that time, analytical gradients were limited to closed-shell Hartree-Fock wavefunctions; however, they were able to extend them to unrestricted (UHF) and restricted open-shell (ROHF) methods in 1970. The first version of the code was completed in 1969 at the Max Planck Institute and the University of Stuttgart. Meyer later named it “MOLPRO”, and it used Gaussian lobe basis sets. In the 1970s and later, MOLPRO added a number of advanced methods such as multiconfiguration self-consistent field (MC-SCF) and internally contracted multireference configuration interaction (MR-CI). In the 1980s, MOLPRO was extended and mostly rewritten by Hans-Joachim Werner, Peter Knowles and Meyer's coworkers, eventually becoming the current version of the program. Meanwhile, in 1976, Pulay was visiting Boggs at the University of Texas, Austin and Schaefer at the University of California. They wrote a new program called TEXAS based on the original MOLPRO and replaced the Gaussian lobe functions with standard Gaussian functions. TEXAS emphasized large molecules, SCF convergence, geometry optimization techniques, and vibrational spectroscopy-related calculations. From 1982 onward, the program was further developed at the University of Arkansas. The first significant expansion was the implementation of several new electron correlation methods by Saebo and a first-order MC-SCF program by Hamilton. An important addition was the implementation of the first practical gauge-invariant atomic orbital (GIAO) NMR program by Wolinski, who also contributed a highly efficient integral package. Bofill implemented an unrestricted natural orbital-complete active space (UNO-CAS) program including analytical gradients; this is a low-cost alternative to MC-SCF and works just as well in most cases. TEXAS was initially parallelized in 1995–1996 on a cluster of 10 IBM RS6000 workstations. 
In 1996, Baker joined Pulay and, around the same time, Intel brought out the Pentium Pro, a PC processor that was competitive with low-end workstations and cheaper by around an order of magnitude. Recognizing the potential of this development for computational chemistry, PQS was formed and an SBIR grant application was submitted in July 1997 for the commercial development of PC clusters for parallel ab initio calculations. In the meantime, the Pulay group, funded by a National Science Foundation grant, set about constructing a Linux cluster using 300 MHz Pentium II processors. By a fortunate circumstance, a few capable and PC-proficient graduate students were available, specifically Magyarfalvi and Shirel. The PC cluster was a complete success, and significantly outperformed the IBM workstation cluster that had been the group's computational mainstay, at a fraction of its cost. The PQS software was based on the TEXAS code, and parts of it, chiefly the NMR code, were licensed to PQS by the University of Arkansas. Much of the code was extensively rewritten to meet the twin aims of (a) having all major functionality fully parallel; and (b) being able to routinely perform calculations on large systems. They aimed primarily for a modest level of parallelism (from 8 to 32 CPUs), as this is the most common size for an individual or group resource. Indeed, even on very large clusters it is normal for any given user to be allocated only a fraction of the available processors. Features The basic capabilities in high-level correlated energies for PQS ab initio v. 4.0 include MP3, MP4, CID, CISD, CEPA-0, CEPA-2, QCISD, QCISD(T), CCD, CCSD and CCSD(T) wavefunctions; enforced geometry optimization (used, among other things, to simulate the results of Atomic Force Microscopy (AFM) experiments); full-accuracy, canonical UMP2 energies; and analytical polarizabilities and hyperpolarizabilities for HF and DFT wavefunctions. An efficient vectorized Gaussian integral package allowing high angular momentum basis functions and general contractions. Abelian point group symmetry throughout; utilizes full point group symmetry (up to Ih) for the geometry optimization step and the Hessian (2nd derivative) CPHF. Closed-shell (RHF) and open-shell (UHF) SCF energies and gradients, including several initial wavefunction guess options. Improved SCF convergence for open-shell systems. Closed-shell (RHF) and open-shell (UHF) density functional energies and gradients including all popular exchange-correlation functionals: VWN, B88, OPTX, LYP, P86, PW91, PBE, B97, HCTH, B3LYP, user-defined functionals, etc. Fast and accurate pure DFT energies and gradients for large basis sets using the Fourier Transform Coulomb (FTC) method. Efficient, flexible geometry optimization for all of these methods, including the Eigenvector Following (EF) algorithm for minimization and saddle-point searches, Pulay's GDIIS algorithm for minimization, and the use of Cartesian, Z-matrix and delocalized internal coordinates. Includes new coordinates for efficient optimization of molecular clusters and adsorption/reaction on model surfaces. Full range of geometrical constraints including fixed distances, planar bends, torsions and out-of-plane bends between any atoms in the molecule, and frozen (fixed) atoms. Atoms involved in constraints do not need to be formally bonded and - unlike with a Z-matrix - desired constraints do not need to be satisfied in the starting geometry. 
Analytical second derivatives for all of these methods, including the calculation of vibrational frequencies, IR intensities and thermodynamic analysis. Efficient NMR chemical shifts for closed-shell HF and DFT wavefunctions. A full range of effective core potentials (ECPs), both relativistic and non-relativistic, with energies, gradients, analytical second derivatives and NMR. Closed-shell MP2 energies and analytical gradients and dual-basis MP2 energies; numerical MP2 second derivatives. Potential scan, including scan + optimization of all other degrees of freedom. Reaction Path (IRC) following using either Z-matrix, Cartesian or mass-weighted Cartesian coordinates. Conductor-like screening solvation model (COSMO) including energies, analytical gradients, numerical second derivatives and NMR. Population analysis, including bond orders and atomic valencies (with free valencies for open-shell systems); CHELP and Cioslowski charges. Weinhold's Natural Bond Orbital (NBO) analysis, including natural population and steric analysis. Properties module with charge, spin-density and electric field gradient at the nucleus. Polarizabilities and dipole and polarizability derivatives; Raman intensities. Full semiempirical package, both open-shell (unrestricted) and closed-shell energies and gradients, including MINDO/3, MNDO, AM1 and PM3. For the latter, all main group elements through the fourth row (except the noble gases), as well as zinc and cadmium, have been parametrized. Molecular mechanics using the Sybyl 5.2 and UFF force fields. QM/MM using the ONIOM method. Molecular dynamics using the simple Verlet algorithm. Pople-style input for quick input generation and compatibility with other programs. Graphical input generation and display. All major ab initio functionality is fully parallel (except MP2 gradients, which are serial only - a parallel version is under development). Calculation of molecular structures and vibrational spectra for transition states, and of infrared (IR), Raman and vibrational circular dichroism (VCD) spectra. See also References External links Computational chemistry software Chemistry software for Linux Science software that uses Qt Proprietary commercial software for Linux Proprietary software that uses Qt
22573559
https://en.wikipedia.org/wiki/Confederate%20Stamp%20Alliance
Confederate Stamp Alliance
The Confederate Stamp Alliance is a philatelic organization dedicated to the collection and study of postage stamps and postal history of the Confederate States of America (CSA). It is an affiliate (No. 73) of the American Philatelic Society. History The Alliance was first suggested by Dr. Marye Y. Dabney to August Dietz Sr. in 1934 and was established in 1935 under the guidance of the famous CSA philatelist August Dietz; by 1937 it already had 85 active members. Membership Membership is available to all collectors and students of postage stamps and postal history of the Confederate States of America. Application for membership may be completed on the Alliance’s website. CSA conventions The Alliance conducts an annual CSA Philatelic Convention and Exhibition held at various national stamp shows across the country, allowing CSA stamp collectors to meet and discuss their hobby, and to view exhibits of CSA postal history. Authentication services The Alliance provides authentication services for postal material of the CSA and issues Certificates of Authentication for stamps and covers found to be authentic. The service was founded and organized October 1, 1945, as the CSA Authentication Committee. It currently has ten voting members, including the Recording Secretary, and a stable of consultants with expertise in specialty areas. It was renamed the CSA Authentication Service in 2006. An important reference for the authentication service is the Freeland-Hill-Gallagher Reference Collection—a collection of Confederate fakes, forgeries, and fantasies formed by the late Rev. Paul B. Freeland. It was purchased for and donated to the Alliance in 1977 by members John R. Hill Jr. and D. Scott Gallagher. Confederate Philatelist The official publication of the organization is its award-winning journal, the Confederate Philatelist, which is published quarterly. Previous publications of the organization were the Confederate Bulletin, from 1940 to 1952, and the Confederate Stamp Album, from 1956 to 1959, with The Confederate Philatelist starting publication in 1960. CSA Catalog project In 1929, August Dietz Sr. published The Postal Service of the Confederate States of America. In 1931, Dietz published the first actual catalog that bore his name, a small volume of 320 pages that was followed up with a supplement of 80 pages in 1932. Subsequent editions were issued in 1937, 1945, 1959 and 1986. In 2006, the Alliance acquired the rights to the Dietz catalogs and in 2012 published the Confederate States of America Catalog and Handbook of Stamps and Postal History (edited by Patricia A. Kaufmann, Francis J. Crown Jr. and Jerry S. Palazolo). It is the lineal descendant of the Dietz catalogs. See also Postage stamps and postal history of the Confederate States References The Confederate Stamp Alliance Philatelic organizations based in the United States
6795633
https://en.wikipedia.org/wiki/Lin%20Yi-bing
Lin Yi-bing
Lin Yi-bing or Jason Lin () ( Born: October 26, 1961 ) is a Taiwanese academic who has served as the Chair Professor of the Department of Computer Science and Information Engineering (CSIE) at National Chiao Tung University (NCTU) since 1995, and since 2002, the Chair Professor of the Department of Computer Science and Information Management (CSIM), at Providence University, a Catholic university in Taiwan. He also serves as Vice President of the National Chiao Tung University. Brief biography Lin entered the National Cheng Kung University in 1980 and graduated with a Bachelor of Science in Electrical Engineering (BSEE) in 1983. In 1985, he undertook a doctorate program at the University of Washington (Advisor: Ed Lazowska), and graduated with a Ph.D. in Computer Science in 1990. His research interests include personal communications, mobile computing, intelligent network signaling, computer telephony integration, and parallel simulation. He has developed an Internet of Things (IoT) platform called IoTtalk. This platform has been used for sustainable applications including AgriTalk for intelligent agriculture, EduTalk for intelligent education, CampusTalk for intelligent university campus, and so on. Career chronology 1983 - 1985: Second Lieutenant Instructor, Communication and Electronics School of Chinese Army, Taiwan, R.O.C. 1990 - 1995: Research Scientist, Applied Research Area, Bell Communications Research, Morristown, New Jersey 1995 - 1996: TRB Review Committee Member for Telecommunication Laboratories (TL), Chunghwa Telecom Co., Ltd. 1995–present: Professor, Department of Computer Science and Information Engineering, National Chiao Tung University 1996: Deputy Director, Microelectronics and Information Systems Research Center, (MIRC), NCTU 1996 - 1997: Consultant, Computer & Communication Research Laboratories, Industrial Technology Research Institute (CCL/ITRI) 1997 - 1999: Chairman, Department of Computer Science and Information Engineering, National Chiao Tung University 1999–present: Adjunct Research Fellow, Academia Sinica 2002–present: Chair Professor, Department of Computer Science and Information Management, Providence University, Shalu, Taiwan 2004 - 2006: Dean, Office of Research and Development, National Chiao Tung University 2006 - 2011: Dean, College of Computer Science, National Chiao Tung University Member of the International Advisory Board, Alpine Research and Development Lab for Networks and Telematics, University of Trento, Italy. 2009–present: Member, Board of Directors, Chunghwa Telecommunications 2011–present: Vice President, National Chiao Tung University Source: Chiao Tung University webpage Publications Lin is the co-author of three books Wireless and Mobile Network Architecture (co-author with Imrich Chlamtac; published by John Wiley, 2001), Wireless and Mobile All-IP Networks (John Wiley, 2005), and Charging for Mobile All-IP Telecommunications (John Wiley, 2008). Chiou, T., Tsai, S., & Lin, Y. (2014). Network security management with traffic pattern clustering. SOFT COMPUTING. Yang, S., Lin, Y., Gan, C., Lin, Y., & Wu, C. (2014). Multi-link Mechanism for Heterogeneous Radio Networks. Wireless Personal Communications. Lin, Y., Liou, R., Sung, Y. Coral, & Cheng, P. (2014). Performance Evaluation of LTE eSRVCC with Limited Access Transfers. IEEE Transactions on Wireless Communications. Lin, P., & Lin, Y. (2014). An IP-Based Packet Test Environment for TD-LTE and LTE FDD. IEEE Communications Magazine. Hung, H., Lin, Y., & Luo, C. (2014). 
Deriving the distributions for the numbers of short message arrivals. Wireless Communications and Mobile Computing. Liou, R., Lin, Y., Chang, Y., Hung, H., Peng, N., & Chang, M. (2013). Deriving the Vehicle Speeds from a Mobile Telecommunications Network. IEEE Transactions on Intelligent Transportation Systems. Lin, Y., Liou, R., Chen, Y., & Wu, Z. (2013). Automatic event-triggered call-forwarding mechanism for mobile phones. Wireless Communications and Mobile Computing. Fu, H., Lin, P., & Lin, Y. (2013). Reducing Signaling Overhead for Femtocell/Macrocell Networks. IEEE Transactions on Mobile Computing. Sanchez-Esguevillas, A., Carro, B., Camarillo, G., Lin, Y., Garcia-Martin, M. A., & Hanzo, L. (2013). IMS: The New Generation of Internet-Protocol-Based Multimedia Services. Proceedings of the IEEE. Yang, S., Cheng, W., Hsu, Y., Gan, C., & Lin, Y. (2013). Charge scheduling of electric vehicles in highways. Mathematical and Computer Modeling. Lin, Y., Huang-Fu, C., & Alrajeh, N. (2013). Predicting Human Movement Based on Telecom's Handoff in Mobile Networks. IEEE Transactions on Mobile Computing. Lin, Y., Lin, P., Sung, Y. Coral, Chen, Y., Chen, W., Alrajeh, N., Lin, B. Paul, & Gan, C. (2013). Performance Measurements of TD-LTE, Wimax and 3G Systems. IEEE Wireless Communications. Chuang, C., Lin, Y., & Ren, Z. Julie (2013). chapter preloading mechanism for e-reader in mobile environment. Information Sciences. Yang, S., Wang, H., Gan, C., & Lin, Y. (2013). Mobile charging information management for smart grid networks. International Journal of Information Technology. Liou, R., & Lin, Y. (2013). Mobility management with the central-based location area policy. Computer Networks''. 鄭., Cheng, P., 林., 陳., Lin, Y., & Chen, R. (2013). 長期演進技術之加強單一無線語音通話連續性的限制通話轉移次數研究. 林., Lin, P., 林., 陳., Lin, Y., & Chen, W. (2013). 行動電信網路之IP封包量測. 劉., Liou, R., 林., & Lin, Y. (2013). LTE 移動管理及其對通話控制影響之研究. 羅., Luo, C., 林., 蘇., Lin, Y., & Sou, S. (2013). 簡訊傳送模型之研究. 吳., Wu, C., 林., & Lin, Y. (2013). 以多重無線存取技術強化高速列車無線傳輸之研究. Awards 1997, 1999 and 2001 Distinguished Research Awards from National Science Council, ROC 1998 Outstanding Youth Electrical Engineer Award from CIEE, ROC 2003 IEEE Fellow 2003 ACM Fellow 2004 AAAS Fellow 2004 K.-T. Li Outstanding Award 2005 IET/IEE Fellow 2005 Pan WY Distinguished Research Award 2005 Teco Award 2005 Medal of Information, IICM 2006 Best Impact Award, IEEE Taipei Section 2006 ISI Highly Cited Scholar (Author Publication Number: A0096-206-L) 2006 Academic Publication Award of The Sun Yat-Sen Cultural Foundation 2006 Academic Award of the Ministry of Education 2007 KT Hou Honored Award 2007 HP Technology for Teaching Higher Education Grant Award 2007 YZ Hsu Technology Cathedra Award 2008 Award for Outstanding contributions in Science and Technology, Executive Yuen, ROC. 2009 IBM Shared University Research Award 2010 IBM Faculty Award 2010 IEEE Region 10 Academia-Industry Partnership Award 2010 IEEE Vehicular Technology Society "Top Associate Editor" 2011 TWAS Prize in Engineering Sciences 2011 National Chair Award, Ministry of Education, ROC. References External links Jason Yi-Bing Lin's home page. Yi-Bing Lin's blog National Chiao Tung University faculty Taiwanese computer scientists Fellows of the Association for Computing Machinery Living people Fellow Members of the IEEE TWAS laureates Ministers of Science and Technology of the Republic of China 1961 births
22927673
https://en.wikipedia.org/wiki/After%20the%20Software%20Wars
After the Software Wars
After the Software Wars is a book by Keith Curtis about free software and its importance in the computing industry, specifically about its impact on Microsoft and the proprietary software development model. The book is about the power of mass collaboration and possibilities of reaching up to a singular rationale showing successful collaborative examples in open source such as Linux and Wikipedia. Keith Curtis attended the University of Michigan, but dropped out to work as a programmer for Microsoft after meeting Bill Gates in 1993. He worked there for 11 years, and then left after he found he was bored. He then wrote and self-published After the Software Wars to explain the caliber of free and open source software and why he believes Linux is technically superior to any proprietary operating system. References External links 2009 non-fiction books Books about free software Software development philosophies Microsoft Works about the information economy
521939
https://en.wikipedia.org/wiki/Sneakernet
Sneakernet
Sneakernet, also called sneaker net, is an informal term for the transfer of electronic information by physically moving media such as magnetic tape, floppy disks, optical discs, USB flash drives or external hard drives between computers, rather than transmitting it over a computer network. The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to walking in sneakers as the transport mechanism. Alternative terms may be floppy net, train net, pigeon net, tennis shoe net. Summary and background Sneakernets are in use throughout the computer world. Sneakernet may be used when computer networks are prohibitively expensive for the owner to maintain, in high-security environments where manual inspection (for re-classification of information) is necessary, where information needs to be shared between networks with different levels of security clearance, when data transfer is impractical due to bandwidth limitations, when a particular system is simply incompatible with the local network, unable to be connected, or when two systems are not on the same network at the same time. Because Sneakernets take advantage of physical media, security measures used for the transfer of sensitive information are respectively physical. This form of data transfer is also used for peer-to-peer (or friend-to-friend) file sharing and has grown in popularity in metropolitan areas and college communities. The ease of this system has been facilitated by the availability of USB external hard drives, USB flash drives and portable music players. The United States Postal Service offers a Media Mail service for compact discs, among other items. This provides a viable mode of transport for long distance Sneakernet use. In fact, when mailing media with sufficiently high data density such as high capacity hard drives, the throughput (data transferred per unit of time) as well as the cost per unit of data transferred may compete favorably with networked methods of data transfer. Usage examples Afghanistan In 2021 Taliban-governed Afghanistan, "computer kars" distribute Internet-derived content by hand: "Movies, music, mobile applications, iOS updates, and naughty videos. Also creating Apple IDs and social media accounts, and backing up and unlocking phones and recovering data." The kars collectively maintain an archive of hundreds of terabytes of data. Four terabytes of the latest Indian or American movies or Turkish TV dramas, dubbed in the Afghan national languages Dari and Pashto reportedly wholesale for about 800 afghanis, or nine US dollars, while the retail price of five gigabytes of content is 100 afghanis, or one US dollar. Kars report that their earnings have dropped 90% under Taliban rule. Australia When Australia joined Usenet in 1983, it received articles via tapes sent from the United States to the University of Sydney, which disseminated data to dozens of other computers on the country's Unix network. Bhutan The Rigsum Sherig Collection project uses a sneakernet to distribute offline educational resources, including Kiwix and Khan Academy on a Stick, to hundreds of schools and other educational institutional in the Kingdom of Bhutan. Many of the schools in Bhutan have computers or IT labs, but no Internet connection (or a very slow one). The sneakernet, facilitated by teachers, distributes about 25 GB of free, open-source educational software to the schools, often using external hard disks. 
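The throughput comparison mentioned earlier (shipping high-density media versus sending the same data over a network) can be made concrete with a back-of-the-envelope calculation; the drive capacity, transit time and link speed below are illustrative assumptions, not measured figures:

DRIVE_TB = 4            # assumed capacity of one shipped hard drive
TRANSIT_HOURS = 24      # assumed courier transit time
LINK_MBPS = 100         # assumed sustained speed of the network link

drive_bits = DRIVE_TB * 8 * 10**12
sneakernet_mbps = drive_bits / (TRANSIT_HOURS * 3600) / 10**6
network_hours = drive_bits / (LINK_MBPS * 10**6) / 3600
print(f"shipping the drive: about {sneakernet_mbps:.0f} Mbit/s effective throughput")
print(f"using the link:     about {network_hours:.0f} hours for the same data")

Under these assumptions the parcel delivers roughly 370 Mbit/s of effective bandwidth, while the same data would occupy the link for about 89 hours; the trade-off, of course, is a latency measured in hours rather than milliseconds, which is why the approach suits bulk transfer rather than interactive use.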
Cuba El Paquete Semanal is a roughly 1TB compilation of media, distributed weekly throughout Cuba via portable hard drives and USB memory sticks. North Korea North Korean dissidents have been known to smuggle flash drives filled with western movies and television shows, largely in an effort to inspire a cultural revolution. Pakistan The May 2011 raid of Osama bin Laden's compound in Abbottabad, Pakistan, revealed that he used a series of USB thumb drives to store his email drafts. A courier of his would then take the saved emails to a nearby Internet cafe and send them out to the desired recipients. South Africa In September 2009, Durban company Unlimited IT reportedly pitted a messenger pigeon against South African ISP Telkom to transfer 4 GB of data from Howick to Durban. The pigeon, carrying the data on a memory stick, arrived in one hour eight minutes, with the data taking another hour to read from the memory stick. During the same two-hour period, only about 4.2% of the data had been transferred over the ADSL link. A similar experiment was conducted in England in September 2010; the "pigeonnet" also proved superior. In November 2009 the Australian comedy/current-affairs television program Hungry Beast repeated this experiment. The experiment had the team transfer a 700 MB file via three delivery methods to determine which was the fastest; A carrier pigeon with a microSD card, a car carrying a USB Stick, or a Telstra ADSL line. The data was to be transferred a distance of 132 km by road. The pigeon won the race with a time of approximately 1 hour 5 minutes, the car came in second at 2 hours 10 minutes, while the internet transfer did not finish, having dropped out a second time and not come back. Wizzy Digital Courier provided Internet access to schools in South Africa with poor or no network connectivity by implementing UUCP on USB memory sticks. This allowed offline cached email transport and scoops of web pages to back-fill a web cache. United States Google has used a sneakernet to transport large datasets, such as the 120 TB of data from the Hubble Space Telescope. Users of Google Cloud can import their data into Google Cloud Storage through sneakernet. Oracle similarly offers its Data Transfer Service to customers to migrate data to Oracle Cloud Infrastructure or export data from it. The SETI@home project uses a sneakernet to overcome bandwidth limitations: data recorded by the radio telescope in Arecibo, Puerto Rico was stored on magnetic tapes which were then shipped to Berkeley, California, for processing. In 2005, Jim Gray reported sending hard drives and even "metal boxes with processors" to transport large amounts of data by postal mail. Very Long Baseline Interferometry performed using the Very Long Baseline Array ships hard drives to a data reduction site in Socorro, New Mexico. They refer to their data transfer mechanism as "HDOA" (Hard Drives On Airplane). Data analytics teams in the financial services sector often use sneakernets to transfer sensitive corporate information and information obtained from data mining, such as ledger entries, customer data and financial statistics. 
There are several reasons for this: firstly, sneakernets can generally provide very high security (and possibly more importantly, they are perceived to be secure) due to the impossibility of a man-in-the-middle attack or packet sniffing; secondly, the volumes of data concerned are often extremely high; and thirdly, setting up secure network links between the client business and the analytics team's facilities is often either impossible or an extremely convoluted process. In 2015 Amazon Web Services launched AWS Snowball, a , 50 TB device for transporting data to the AWS cloud; and in 2016 AWS Snowmobile, a truck to transport up to 100 PB of data in one load. For similar reasons, there is also a Google Transfer Appliance and an IBM Cloud Mass Data Migration device. Observation data from the Event Horizon Telescope is collected on hard drives which are transported by commercial freight airplanes from the various telescopes to the MIT Haystack Observatory and the Max Planck Institute for Radio Astronomy, where the data is analyzed. USSR In later USSR, the operating system called DEMOS was created and adapted for many types of Soviet computers by cloning versions of UNIX that were brought into USSR on magnetic tapes bypassing the Iron Curtain. This allowed to build Relcom country-wide X.25 network to provide global Usenet access for Soviet users which led to the registration of .su ("Soviet Union") top level domain in 1990. In media Non-fiction The first USENET citation is July 16, 1985 and it was widely considered an old joke already. Other alleged speakers included Tom Reidel, Warren Jackson, or Bob Sutterfield. Although the station wagon transporting magnetic tapes is generally considered the canonical version, variants using trucks or Boeing 747s or C-5s and later storage technologies such as CD-ROMs, DVDs, Blu-rays, or SD Cards have frequently appeared. The very first problem in Andrew S. Tanenbaum's 1981 textbook Computer Networks asks the student to calculate the throughput of a St. Bernard carrying floppy disks. Fiction The Terry Pratchett novel Going Postal (2004) includes a contest between a horse-drawn mail coach and the "Grand Trunk Clacks" (a semaphore line) to see which is faster to transmit the contents of a book to a remote destination. William Gibson's novel Spook Country (2007) also features sneakernets, with iPods being the storage device used to clandestinely move information. In Cory Doctorow's novel Little Brother, the main character uses the term sneakernet to describe how he and his friends distribute the fictitious XNet software for encrypted communications. The "valuable data file" has become a common MacGuffin in action films and television programs (the motif of the "valuable letter or documents" (pre-electronic information storage technology) dates back hundreds of years). The film Johnny Mnemonic (1995), based on the short story by William Gibson, stars Keanu Reeves as a digital courier with 320 GB of corporate data transported in his head. The film Live Free or Die Hard (2007) depicts a digital thief attempting to download 500 TB of financial data to a suitcase-sized package. In the episode "Amen" of The Newsroom (2012), associate producer Maggie Jordan, after being told "it's 2011, we don't run film manually", is told that the video is taking too long to render on the local machine. It cannot feasibly be transferred over the network in raw form and must be run to the newsroom manually on a USB drive. 
In the film Elysium (2013), produced, written, and directed by Neill Blomkamp, Matt Damon as Max Da Costa downloads sensitive data into his brain implant in a data heist. In the film Snowden (2016), the titular character is seen evading security by carrying a data chip concealed in a Rubik's Cube. In the James Bond film Skyfall (2012), MI6 sets up a sneakernet due to security concerns. A large portion of the game Fallout: New Vegas involves a courier recovering and delivering a stolen casino chip containing part of a program for an anti missile defense system. Similar concepts Delay-tolerant networks, such as the Haggle project at Cambridge University. IP over Avian Carriers (RFC 1149), an April Fools' Day RFC describing the transmission of messages via homing pigeon. See also Air gap (networking) Darknet Data Mule Jargon File Meatspace Pod slurping Sideloading Twilight (warez) USB dead drop References Computer jargon Computer networking Data transmission File sharing networks 1980s neologisms
4975577
https://en.wikipedia.org/wiki/Computer%20City
Computer City
Computer City was a chain of United States-based computer superstores operated by Tandy Corporation; the retailer was sold to CompUSA in 1998 and was merged into the CompUSA organization. Computer City was a supercenter concept featuring name-brand and private label computers, software and related products; at the height of its success the company had over 101 locations in the United States and five in Europe. History In 1981, the original Computer City was founded by Leonard and Myrna Simon in Costa Mesa, California. Len Simon sat on the original Apple Retail Council while Myrna was in charge of HR. Within the first year, Computer City had added stores in Brea and Pasadena, CA and with the help of managers Mike Mostyn, Gordon Klatt and Greg Gadbois, Computer City expanded to San Diego, Beverly Hills, Encino, Cerritos, and Torrance CA. Computer City was the first independent Los Angeles computer retailers to offer the original IBM 5150 PC along with Sears and ComputerLand. Computer City was acquired in 1983 by Rick and Joe Inatome and now known as Inacomp became the second largest computer retailer in the US with sales over $500M / year in computer products. By 1985, market conditions in computer retailing had changed. As computers were less of a mystery to more people, profit margins began to drop. Retailers who offered business-to-business consultative services to sell computer systems could no longer afford expensive salespeople. Taking the name of the Los Angeles retailer they had purchased two years earlier, Rick, Vee, and Joe Inatome gave rise to the first big-box merchandising concept – Computer City. With an investment from Mitsubishi, Joe leveraged his vendor relationships at Inacomp to bring IBM, Apple, and Compaq to their first big-box merchandised store, initially privately held by Inatome and Mitsubishi. Innovative retail practices Computer City innovated a number of retail concepts that are now common retail practices. First begun at the Costa Mesa Incomp, the store hosted a professional service bureau called The Graphic Zone, that provided film and graphic services for the nascent desktop publishing industry, the store operated a cafe which served coffee and sandwiches to prolong shopping visits, and the store featured a product training center that included an Electrosonic Video Wall, with 16 32" monitors which served as digital signage for the store, when training wasn't in session. The store also made heavy use of vendor managed inventory, vendor shops, and CO-OP funded retail displays which are now common practice throughout the retail industry. Acquisition by Tandy Ultimately, Tandy bought the Computer City concept and store in 1991 and launched Computer City as a national chain (as well as Incredible Universe). Alan Bush, a Radio Shack executive, was named president of the new company. The stores resembled CompUSA's super center concepts, but lacked the financial backing CompUSA had. CompUSA, having a larger market share, bought the company, and in the process, shut down one of its smaller competitors. Two types of store models existed, one was a full size store with an in house Tandy Repair Center similar to a freestanding Tandy Repair Center (later RadioShack Service Centers) that continued to serve RadioShack stores. These stores had sub departments for business sales that would just handle business orders for companies and other organizations, they also offered in store customer training classes for software such as Microsoft Excel, Word, and Powerpoint. 
Some of these locations were as big as some Best Buy Stores. They also operated Computer City Express stores which had no service center in them, nor did they offer classes. They were closer in size to a large RadioShack store. Computer City was recognized as the 2nd fastest retailer to hit $1 Billion in sales in 1995 and in 1996 was recognized as the 2nd fastest retailer to hit $2 Billion in sales. (Sam's Club was the fastest retailer to hit $1 Billion.) Alan Bush and Jim Hamilton Vice President GMM (known as the "Father of Computer Retailing), were the strategists behind the rapid growth and success. In its Westbury/Garden City New York location, Computer City opened right next to its main competitor CompUSA. In King Of Prussia, PA, Computer City was directly across the street from CompUSA, and was in plain sight from CompUSA's main entrance. However, Computer City's entrance opened toward an off-street parking lot. As both Computer City and Tandy's other venture Incredible Universe were both having financial issues, the computer departments of Incredible Universe were changed to Computer City. The Westbury, NY Incredible Universe was also within 3 miles of the above-mentioned CompUSA and Computer City. This may have hurt both of these Tandy divisions. One hallmark of Computer City's retail concept is that the store operated much like a grocery store; customers could not only browse, but select and purchase almost all merchandise without the assistance of a salesperson. Furthermore, until mid-1996, the floor staff did not have revenue quotas and were not paid on commission, though bonuses were applied for selling either the extended warranty Computer City Service Plan (CCSP) or in-store training classes. In retrospect, this model was seen as creating a competitive disadvantage, as computers were still new to many customers in those days and a lack of qualified and knowledgeable salespeople, who had no incentive to self-improve, led to frustrated customers and high return rates. In late 1996. Computer City announced they would be closing 21 stores. Acquisition by CompUSA On June 22, 1998, CompUSA announced that it was purchasing the Computer City chain for US$275 million. Upon completion of the takeover in September 1998, CompUSA shuttered fifty-one of the stores outright, and transitioned the remainders into CompUSA locations. In some states, warranties on items that had been purchased at Computer City were taken over by CompUSA, in other states they were taken over by Radio Shack. In Canada, Computer City Canada stores were sold by CompUSA to Future Shop who, in the end, liquidated all Canadian Computer City locations. Current uses of the name There was an active store in Bermuda by the same name; it was independently founded in January 2000 and closed in December 2012. It was never related to the US or Canadian stores. Also the web address http://computercity.com/ is running but has no relation to the US or Canadian stores. Until around 2015 there were stores in Denmark (10 stores) and Sweden (2 stores) using the original logo. There are also active stores by this name in other parts of the world. 
See also PC Club References American companies established in 1981 American companies disestablished in 1998 Companies based in Orange County, California Computer companies established in 1981 Computer companies disestablished in 1998 Consumer electronics retailers in the United States Defunct computer companies of the United States Defunct retail companies of the United States RadioShack Retail companies disestablished in 1998 Retail companies established in 1981
1695464
https://en.wikipedia.org/wiki/Point%20coordination%20function
Point coordination function
Point coordination function (PCF) is a media access control (MAC) technique used in IEEE 802.11 based WLANs, including Wi-Fi. It resides in a point coordinator also known as access point (AP), to coordinate the communication within the network. The AP waits for PIFS duration rather than DIFS duration to grasp the channel. PIFS is less than DIFS duration and hence the point coordinator always has the priority to access the channel. The PCF is located directly above the distributed coordination function (DCF), in the IEEE 802.11 MAC Architecture. Channel access in PCF mode is centralized and hence the point coordinator sends CF-Poll frame to the PCF capable station to permit it to transmit a frame. In case the polled station does not have any frames to send, then it must transmit null frame. Due to the priority of PCF over DCF, stations that only use DCF might not gain access to the medium. To prevent this, a repetition interval has been designed to cover both (Contention free) PCF & (Contention Based) DCF traffic. The repetition interval which is repeated continuously, starts with a special control frame called Beacon Frame. When stations hear the beacon frame, they start their network allocation vector for the duration of the contention free period of the repetition period. Since most APs have logical bus topologies (they are shared circuits) only one message can be processed at one time (it is a contention based system), and thus a media access control technique is required. Wireless networks may suffer from a hidden node problem where some regular nodes (which communicate only with the AP) cannot see other nodes on the extreme edge of the geographical radius of the network because the wireless signal attenuates before it can reach that far. Thus having an AP in the middle allows the distance to be halved, allowing all nodes to see the AP, and consequentially, halving the maximum distance between two nodes on the extreme edges of a circle-star topology. PCF seems to be implemented only in very few hardware devices as it is not part of the Wi-Fi Alliance's interoperability standard. PCF Interframe Space PCF Interframe Space (PIFS) is one of the interframe space used in IEEE 802.11 based Wireless LANs. PCF enabled access point wait for PIFS duration rather than DIFS to occupy the wireless medium. PIFS duration is less than DIFS and greater than SIFS (DIFS > PIFS > SIFS). Hence AP always has more priority to access the medium. PIFS duration can be calculated as follows: PIFS = SIFS + Slot time See also Contention free pollable Distributed Coordination Function Hybrid Coordination Function DIFS Short Interframe Space Arbitration inter-frame spacing Reduced Interframe Space Extended Interframe Space References External links 802.11 Medium Access Methods on wi-fiplanet.com Media access control P
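The interframe-space relation given above can be checked with a short numerical sketch; the SIFS and slot-time values used here are the ones commonly quoted for 802.11b and are assumptions for illustration only, with DIFS = SIFS + 2 × slot time used for comparison:

SIFS_US = 10                        # assumed 802.11b SIFS, in microseconds
SLOT_US = 20                        # assumed 802.11b slot time, in microseconds

PIFS_US = SIFS_US + SLOT_US         # PIFS = SIFS + slot time
DIFS_US = SIFS_US + 2 * SLOT_US     # DIFS = SIFS + 2 * slot time

print(PIFS_US, DIFS_US)             # 30 50
assert SIFS_US < PIFS_US < DIFS_US  # the priority ordering DIFS > PIFS > SIFS

With these values PIFS comes to 30 µs and DIFS to 50 µs, so a point coordinator waiting only PIFS will always seize the channel before stations that must wait out the full DIFS.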
49711667
https://en.wikipedia.org/wiki/DX%20cluster
DX cluster
A DX cluster is a network of computers, each running a software package dedicated to gathering, and disseminating, information on amateur radio DX (long-distance contact) activities. The computers comprising the network are called nodes, the network itself being termed a cluster of nodes. The nodes may be connected either via radio links or through the internet. Internet nodes generally connect using the telnet protocol. The system acts as an aggregator of information, accepting input from various sources, then making that data available to any user who is connected to the network. History The first DX cluster software, PacketCluster was realized by US radio amateur Dick Newell, AK1A in the late 1980s, and quickly became popular as a means of exchanging DX-related information. Before the internet became widely available, the nodes running the cluster software would connect via radio links at certain frequencies allocated within the amateur radio bands. Users of the system would then connect to a node using frequencies different from those used by the nodes. When the internet became widely available, the system was expanded to make use of telnet connections to internet nodes, in addition to the already established packet radio nodes. Users of internet nodes connect to a particular node using telnet client software. DX cluster software A number of DX cluster software packages are available, among which are: PacketCluster - the original, and still widely used, radio packet cluster software DX Spider - the most widely used of the internet-based cluster software packages CC Cluster Clusse - older cluster software References Amateur radio
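Because internet nodes accept plain telnet-style connections, a minimal client needs nothing more than a TCP socket; the host name, port and callsign below are placeholders chosen for illustration (7300 is assumed here as a typical cluster port), not a real node:

import socket

HOST = "dxcluster.example.org"     # placeholder node address, not a real cluster
PORT = 7300                        # assumed typical DX cluster telnet port
CALLSIGN = "N0CALL"                # placeholder callsign for the login prompt

with socket.create_connection((HOST, PORT), timeout=30) as sock:
    print(sock.recv(4096).decode("ascii", errors="replace"))   # banner and login prompt
    sock.sendall((CALLSIGN + "\r\n").encode("ascii"))          # identify to the node
    while True:                                                # then print spots as they arrive
        data = sock.recv(4096)
        if not data:
            break
        print(data.decode("ascii", errors="replace"), end="")

A dedicated client program would typically add spot filtering and logging on top of this same simple text stream.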
28030850
https://en.wikipedia.org/wiki/Communication%20protocol
Communication protocol
A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. The protocol defines the rules, syntax, semantics and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both. Communicating systems use well-defined formats for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses pre-determined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communication protocols have to be agreed upon by the parties involved. To reach an agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communication what programming languages are to computations. An alternate formulation states that protocols are to communication what algorithms are to computation. Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software they are a protocol stack. Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE (Institute of Electrical and Electronics Engineers) handles wired and wireless networking and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunication protocols and formats for the public switched telephone network (PSTN). As the PSTN and Internet converge, the standards are also being driven towards convergence. Communicating systems History One of the first uses of the term protocol in a data-commutation context occurs in a memorandum entitled A Protocol for Use in the NPL Data Communications Network written by Roger Scantlebury and Keith Bartlett in April 1967. On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The Network Control Program for the ARPANET was first implemented in 1970. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept. Networking research in the early 1970s by Robert E. Kahn and Vint Cerf led to the formulation of the Transmission Control Program (TCP). Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974, still a monolithic design at this time. The International Networking Working Group agreed a connectionless datagram standard which was presented to the CCIT in 1975 but was not adopted by the ITU or by the ARPANET. International research, particularly the work of Rémi Després, contributed to the development of the X.25 standard, based on virtual circuits by the ITU-T in 1976. Computer manufacturers developed proprietary protocols such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's DECnet and Xerox Network Systems. TCP software was redesigned as a modular protocol stack. Originally referred to as IP/TCP, it was installed on SATNET in 1982 and on the ARPANET in January 1983. 
The development of a complete protocol suite by 1989, as outlined in and , laid the foundation for the growth of TCP/IP as a comprehensive protocol suite as the core component of the emerging Internet. International work on a reference model for communication standards led to the OSI model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. Concept The information exchanged between devices through a network or other media is governed by rules and conventions that can be set out in communication protocol specifications. The nature of communication, the actual data exchanged and any state-dependent behaviors, is defined by these specifications. In digital computing systems, the rules can be expressed by algorithms and data structures. Protocols are to communication what algorithms or programming languages are to computations. Operating systems usually contain a set of cooperating processes that manipulate shared data to communicate with each other. This communication is governed by well-understood protocols, which can be embedded in the process code itself. In contrast, because there is no shared memory, communicating systems have to communicate with each other using a shared transmission medium. Transmission is not necessarily reliable, and individual systems may use different hardware or operating systems. To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system. When protocol algorithms are expressed in a portable programming language the protocol software may be made operating system independent. The best-known frameworks are the TCP/IP model and the OSI model. At the time the Internet was developed, abstraction layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, the originally monolithic networking programs were decomposed into cooperating protocols. This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design. Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol suite. Some of the best known protocol suites are TCP/IP, IPX/SPX, X.25, AX.25 and AppleTalk. The protocols can be arranged based on functionality in groups, for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface-functions. To transmit a message, a protocol has to be selected from each layer. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer. Types There are two types of communication protocols, based on their representation of the content being carried: text-based and binary. Text-based A text-based protocol or plain text protocol represents its content in human-readable format, often in plain text. 
The immediate human readability stands in contrast to binary protocols, which have inherent benefits for use in a computer environment (such as ease of mechanical parsing and improved bandwidth utilization). Network applications have various methods of encapsulating data. One method very common with Internet protocols is a text-oriented representation that transmits requests and responses as lines of ASCII text, terminated by a newline character (and usually a carriage return character). Examples of protocols that use plain, human-readable text for their commands are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), and the finger protocol. Text-based protocols are typically optimized for human parsing and interpretation, and are therefore suitable whenever human inspection of protocol contents is required, such as during debugging and during early protocol development design phases. To be clear, all digital communication is fundamentally binary. The “text-based” protocols mentioned here carry binary content that is merely rendered human-readable by a text editor (or other such software). Binary A binary protocol utilizes all values of a byte, as opposed to a text-based protocol, which only uses values corresponding to human-readable characters in ASCII encoding. Binary protocols are intended to be read by a machine rather than a human being. Binary protocols have the advantage of terseness, which translates into speed of transmission and interpretation. Binary protocols have been used in the normative documents describing modern standards like ebXML, HTTP/2, HTTP/3 and EDOC. An interface in UML may also be considered a binary protocol. Basic requirements Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol must include rules describing the context. These kinds of rules are said to express the syntax of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the semantics of the communication. Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed: Data formats for data exchange Digital message bitstrings are exchanged. The bitstrings are divided into fields and each field carries information relevant to the protocol. Conceptually, the bitstring is divided into two parts called the header and the payload. The actual message is carried in the payload. The header area contains the fields with relevance to the operation of the protocol. Bitstrings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size. Address formats for data exchange Addresses are used to identify both the sender and the intended receiver(s). The addresses are carried in the header area of the bitstrings, allowing the receivers to determine whether the bitstrings are of interest and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually, some address values have special meanings. An all-1s address could be taken to address all stations on the network, so sending to this address would result in a broadcast on the local network.
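To make the contrast between text-based and binary representations concrete, the following sketch encodes the same request once as a line of ASCII text terminated by a carriage return and newline, and once as a packed binary header. The field layout (a one-byte version, a one-byte message type and a two-byte payload length) is invented for illustration and does not correspond to any particular standard.

import struct

# Text-based framing: human-readable fields separated by spaces and
# terminated by CRLF, as in many classic Internet protocols.
text_message = "STATUS /device/7 PROTO/1.0\r\n".encode("ascii")

# Binary framing of the same request: fixed-width fields packed into bytes.
# Hypothetical layout: version (1 byte), message type (1 byte),
# payload length (2 bytes, big-endian), followed by the payload itself.
VERSION, MSG_STATUS = 1, 0x02
payload = b"/device/7"
binary_message = struct.pack("!BBH", VERSION, MSG_STATUS, len(payload)) + payload

print(len(text_message), len(binary_message))  # the binary form is terser

Both messages carry the same information; the text form can be read directly in a terminal or debugger, while the binary form is cheaper to parse and transmit.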
The rules describing the meanings of the address value are collectively called an addressing scheme. Address mapping Sometimes protocols need to map addresses of one scheme onto addresses of another scheme, for instance, to translate a logical IP address specified by the application to an Ethernet MAC address. This is referred to as address mapping. Routing When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. The interconnection of networks through routers is called internetworking. Detection of transmission errors Error detection is necessary on networks where data corruption is possible. In a common approach, a CRC of the data area is added to the end of packets, making it possible for the receiver to detect differences caused by corruption. The receiver rejects packets whose CRC does not match and arranges for retransmission. Acknowledgements Acknowledgement of correct reception of packets is required for connection-oriented communication. Acknowledgments are sent from receivers back to their respective senders. Loss of information - timeouts and retries Packets may be lost on the network or be delayed in transit. To cope with this, under some protocols, a sender may expect an acknowledgment of correct reception from the receiver within a certain amount of time. Thus, on timeouts, the sender may need to retransmit the information. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited. Exceeding the retry limit is considered an error. Direction of information flow Direction needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links, or from one sender at a time, as on a shared medium. This is known as media access control. Arrangements have to be made to accommodate the case of collision or contention, where two parties simultaneously transmit or wish to transmit. Sequence control If long bitstrings are divided into pieces and then sent on the network individually, the pieces may get lost or delayed or, on some types of networks, take different routes to their destination. As a result, pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for necessary retransmissions and reassemble the original message. Flow control Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender. Queueing Communicating processes or state machines employ queues (or "buffers"), usually FIFO queues, to deal with the messages in the order sent, and may sometimes have multiple queues with different prioritization. Protocol design Systems engineering principles have been applied to create a set of common network protocol design principles. The design of complex protocols often involves decomposition into simpler, cooperating protocols. Such a set of cooperating protocols is sometimes called a protocol family or a protocol suite, within a conceptual framework. Communicating systems operate concurrently.
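Several of the requirements listed above, notably headers, sequence numbers and error detection, can be illustrated with a small framing routine. The sketch below is a simplified example rather than any specific protocol: it prepends a two-byte sequence number and appends a CRC-32 so that the receiver can reject corrupted frames and wait for a retransmission.

import struct
import zlib

def frame(seq: int, payload: bytes) -> bytes:
    # Header: 2-byte big-endian sequence number, then the payload,
    # then a CRC-32 over header and payload for error detection.
    body = struct.pack("!H", seq) + payload
    return body + struct.pack("!I", zlib.crc32(body))

def unframe(packet: bytes):
    body, crc = packet[:-4], struct.unpack("!I", packet[-4:])[0]
    if zlib.crc32(body) != crc:
        return None                        # corrupted: reject and await retransmission
    return struct.unpack("!H", body[:2])[0], body[2:]

pkt = frame(7, b"hello")
assert unframe(pkt) == (7, b"hello")
damaged = bytearray(pkt)
damaged[3] ^= 0x01                         # flip one bit in the payload
assert unframe(bytes(damaged)) is None     # the corruption is detected

A real protocol would combine this with the acknowledgement, timeout and retry rules described above: the receiver acknowledges each sequence number it accepts, and the sender retransmits any frame whose acknowledgement does not arrive in time.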
An important aspect of concurrent programming is the synchronization of software for receiving and transmitting messages of communication in proper sequencing. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes (CSP). Concurrency can also be modeled using finite state machines, such as Mealy and Moore machines. Mealy and Moore machines are in use as design tools in digital electronics systems encountered in the form of hardware used in telecommunication or electronic devices in general. The literature presents numerous analogies between computer communication and programming. In analogy, a transfer mechanism of a protocol is comparable to a central processing unit (CPU). The framework introduces rules that allow the programmer to design cooperating protocols independently of one another. Layering In modern protocol design, protocols are layered to form a protocol stack. Layering is a design principle that divides the protocol design task into smaller steps, each of which accomplishes a specific part, interacting with the other parts of the protocol only in a small number of well-defined ways. Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple. The communication protocols in use on the Internet are designed to function in diverse and complex settings. Internet protocols are designed for simplicity and modularity and fit into a coarse hierarchy of functional layers defined in the Internet Protocol Suite. The first two cooperating protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP) resulted from the decomposition of the original Transmission Control Program, a monolithic communication protocol, into this layered communication suite. The OSI model was developed internationally based on experience with networks that predated the internet as a reference model for general communication with much stricter rules of protocol interaction and rigorous layering. Typically, application software is built upon a robust data transport layer. Underlying this transport layer is a datagram delivery and routing mechanism that is typically connectionless in the Internet. Packet relaying across networks happens over another layer that involves only network link technologies, which are often specific to certain physical layer technologies, such as Ethernet. Layering provides opportunities to exchange technologies when needed, for example, protocols are often stacked in a tunneling arrangement to accommodate the connection of dissimilar networks. For example, IP may be tunneled across an Asynchronous Transfer Mode (ATM) network. Protocol layering Protocol layering forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols. The protocol layers each solve a distinct class of communication problems. Together, the layers make up a layering scheme or model. Computations deal with algorithms and data; Communication involves protocols and messages; So the analog of a data flow diagram is some kind of message flow diagram. 
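The layering just described can be sketched as each protocol module prepending its own header on the way down the stack and removing it on the way up. The layer names and header contents below are invented for illustration; a real stack would carry addresses, ports, checksums and similar fields in these headers.

def encapsulate(message: bytes) -> bytes:
    transport = b"TRN|" + message      # e.g. ports and sequence numbers
    network   = b"NET|" + transport    # e.g. source and destination addresses
    link      = b"LNK|" + network      # e.g. frame delimiters and a checksum
    return link

def decapsulate(frame: bytes) -> bytes:
    for tag in (b"LNK|", b"NET|", b"TRN|"):
        assert frame.startswith(tag), "unexpected header"
        frame = frame[len(tag):]       # each layer strips only its own header
    return frame

original = b"hello, system B"
assert decapsulate(encapsulate(original)) == original

Each module only interacts with the module directly below it on the sending side, and the reverse unwrapping on the receiving side delivers the message in its original form.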
To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in figure 3. The systems, A and B, both make use of the same protocol suite. The vertical flows (and protocols) are in-system and the horizontal message flows (and protocols) are between systems. The message flows are governed by the rules and data formats specified by protocols. The blue lines mark the boundaries of the (horizontal) protocol layers. Software layering The software supporting protocols has a layered organization and its relationship with protocol layering is shown in figure 5. To send a message on system A, the top-layer software module interacts with the module directly below it and hands over the message to be encapsulated. The lower module fills in the header data in accordance with the protocol it implements and interacts with the bottom module, which sends the message over the communications channel to the bottom module of system B. On the receiving system B, the reverse happens, so ultimately the message gets delivered in its original form to the top module of system B. Program translation is divided into subproblems. As a result, the translation software is layered as well, allowing the software layers to be designed independently. The same approach can be seen in the TCP/IP layering. The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between the application layer and the transport layer is called the operating system boundary. Strict layering Strictly adhering to a layered model, a practice known as strict layering, is not always the best approach to networking. Strict layering can have a negative impact on the performance of an implementation. While the use of protocol layering is today ubiquitous across the field of computer networking, it has been historically criticized by many researchers because abstracting the protocol stack in this way may cause a higher layer to duplicate the functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis. Design patterns Commonly recurring problems in the design and implementation of communication protocols can be addressed by software design patterns. Formal specification Popular formal methods of describing communication syntax are Abstract Syntax Notation One (an ISO standard) and augmented Backus–Naur form (an IETF standard). Finite-state machine models and communicating finite-state machines are used to formally describe the possible interactions of the protocol. Protocol development For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures. Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language. Source independence of the specification provides wider interoperability. Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. The members of the standards organization agree to adhere to the work result on a voluntary basis.
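Returning to the formal specification methods mentioned above, the behavior of a protocol endpoint can be written down directly as a finite-state machine transition table. The sketch below models a stop-and-wait sender with two states; the state, event and action names are invented for the example rather than taken from any published specification.

# Transition table: (state, event) -> (next state, action).
TRANSITIONS = {
    ("IDLE", "send_request"): ("AWAIT_ACK", "transmit frame"),
    ("AWAIT_ACK", "timeout"):  ("AWAIT_ACK", "retransmit frame"),
    ("AWAIT_ACK", "ack"):      ("IDLE", "accept next message"),
}

def step(state: str, event: str):
    # Events with no defined transition are simply ignored in this model.
    return TRANSITIONS.get((state, event), (state, "ignore"))

state = "IDLE"
for event in ("send_request", "timeout", "ack"):
    state, action = step(state, event)
    print(event, "->", state, ":", action)

Because the action depends on both the current state and the incoming event, this is a Mealy-style machine; a Moore-style formulation would attach the actions to the states themselves.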
Often the members are in control of large market-shares relevant to the protocol and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol. The need for protocol standards The need for protocol standards can be shown by looking at what happened to the bi-sync protocol (BSC) invented by IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to enhance the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol. One can assume, that a standard would have prevented at least some of this from happening. In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are monopolized (or oligopolized). They can hold a market in a very negative grip, especially when used to scare away competition. From a historical perspective, standardization should be seen as a measure to counteract the ill-effects of de facto standards. Positive exceptions exist; a de facto standard operating system like Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition. Standards organizations Some of the standards organizations of relevance for communication protocols are the International Organization for Standardization (ISO), the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF). The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunication engineers designing the public switched telephone network (PSTN), as well as many radio communication systems. For marine electronics the NMEA standards are used. The World Wide Web Consortium (W3C) produces protocols and standards for Web technologies. International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned, cooperate closely with each other. The standardization process In the ISO, the standardization process starts off with the commissioning of a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties (including other standards bodies) in order to provoke discussion and comments. This will generate a lot of questions, much discussion and usually some disagreement. These comments are taken into account and a draft proposal is produced by the working group. After feedback, modification, and compromise the proposal reaches the status of a draft international standard, and ultimately an international standard. 
International standards are reissued periodically to handle the deficiencies and reflect changing views on the subject. OSI standardization A lesson learned from ARPANET, the predecessor of the Internet, was that protocols need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for structured protocols (such as layered protocols) and their standardization. This would prevent protocol standards with overlapping functionality and would allow clear definition of the responsibilities of a protocol at the different levels (layers). This gave rise to the Open Systems Interconnection model (OSI model), which is used as a framework for the design of standard protocols and services conforming to the various layer specifications. In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered. Each layer provides service to the layer above it using the services of the layer immediately below it. The top layer provides services to the application process. The layers communicate with each other by means of an interface, called a service access point. Corresponding layers at each system are called peer entities. To communicate, two peer entities at a given layer use a protocol specific to that layer which is implemented by using services of the layer below. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it. In the OSI model, the layers and their functionality are (from highest to lowest layer): The Application layer may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g. character sets and data structures), determination of cost and acceptable quality of service, selection of the dialogue discipline, including required logon and logoff procedures. The presentation layer may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, formatting and special purpose transformations (e.g. data compression and data encryption). The session layer may provide the following services to the presentation layer: establishment and release of session connections, normal and expedited data exchange, a quarantine service which allows the sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission, interaction management so presentation entities can control whose turn it is to perform certain control functions, resynchronization of a session connection, reporting of unrecoverable exceptions to the presentation entity. The transport layer provides reliable and transparent data transfer in a cost-effective way as required by the selected quality of service. 
It may support the multiplexing of several transport connections onto one network connection or split one transport connection into several network connections. The network layer does the setup, maintenance and release of network paths between transport peer entities. When relays are needed, routing and relay functions are provided by this layer. The quality of service is negotiated between network and transport entities at the time the connection is set up. This layer is also responsible for network congestion control. The data link layer does the setup, maintenance and release of data link connections. Errors occurring in the physical layer are detected and may be corrected. Errors are reported to the network layer. The exchange of data link units (including flow control) is defined by this layer. The physical layer describes details like the electrical characteristics of the physical connection, the transmission techniques used, and the setup, maintenance and clearing of physical connections. In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. Connection-oriented communication requires some form of session and (virtual) circuits, hence the session layer, which is absent from the TCP/IP model. The constituent members of ISO were mostly concerned with wide area networks, so the development of RM/OSI concentrated on connection-oriented networks, and connectionless networks were only mentioned in an addendum to RM/OSI. At the time, the IETF had to cope with this and the fact that the Internet needed protocols that simply were not there. As a result, the IETF developed its own standardization process based on "rough consensus and running code", which it has documented in its own series of publications. Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and, because of this, both TCP and IP could be developed into international standards. Taxonomies Classification schemes for protocols usually focus on the domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks, respectively. An example of function is a tunneling protocol, which is used to encapsulate packets in a high-level protocol so that the packets can be passed across a transport system using the high-level protocol. A layering scheme combines both function and domain of use. The dominant layering schemes are the ones proposed by the IETF and by ISO. Despite the fact that the underlying assumptions of the layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes. The layering scheme from the IETF is called Internet layering or TCP/IP layering. The layering scheme from ISO is called the OSI model or ISO layering. In networking equipment configuration, a term-of-art distinction is often drawn: The term "protocol" strictly refers to the transport layer, and the term "service" refers to protocols utilizing a "protocol" for transport. In the common case of TCP and UDP, services are distinguished by port numbers.
Conformance to these port numbers is voluntary, so in content inspection systems the term "service" strictly refers to port numbers, and the term "application" is often used to refer to protocols identified through inspection signatures.
See also
Lists of network protocols
Notes
References
Bibliography
Radia Perlman: Interconnections: Bridges, Routers, Switches, and Internetworking Protocols. 2nd Edition. Addison-Wesley, 1999. In particular Ch. 18 on "network design folklore", which is also available online at http://www.informit.com/articles/article.aspx?p=20482
Gerard J. Holzmann: Design and Validation of Computer Protocols. Prentice Hall, 1991. Also available online at http://spinroot.com/spin/Doc/Book91.html In particular Ch. 11 on protocol layering. Also has an RFC guide and a Glossary of Internetworking Terms and Abbreviations.
Internet Engineering Task Force, abbr. IETF (1989): RFC 1122, Requirements for Internet Hosts -- Communication Layers, R. Braden (ed.). Available online at http://tools.ietf.org/html/rfc1122. Describes TCP/IP to the implementors of protocol software. In particular, the introduction gives an overview of the design goals of the suite.
M. Ben-Ari (1982): Principles of Concurrent Programming. 10th Print. Prentice Hall International.
C.A.R. Hoare (1985): Communicating Sequential Processes. 10th Print. Prentice Hall International. Available online via http://www.usingcsp.com
R.D. Tennent (1981): Principles of Programming Languages. 10th Print. Prentice Hall International.
Brian W. Marsden (1986): Communication Network Protocols. 2nd Edition. Chartwell-Bratt.
Andrew S. Tanenbaum (1984): Structured Computer Organization. 10th Print. Prentice Hall International.
Further reading
Radia Perlman, Interconnections: Bridges, Routers, Switches, and Internetworking Protocols (2nd Edition). Addison-Wesley, 1999. In particular Ch. 18 on "network design folklore".
Gerard J. Holzmann, Design and Validation of Computer Protocols. Prentice Hall, 1991. Also available online at http://spinroot.com/spin/Doc/Book91.html
External links
Javvin's Protocol Dictionary
Overview of protocols in telecontrol field with OSI Reference Model
Data transmission
467996
https://en.wikipedia.org/wiki/ISO%209241
ISO 9241
ISO 9241 is a multi-part standard from the International Organization for Standardization (ISO) covering ergonomics of human-computer interaction. It is managed by the ISO Technical Committee 159. It was originally titled Ergonomic requirements for office work with visual display terminals (VDTs). From 2006 on, the standards were retitled to the more generic Ergonomics of Human System Interaction. As part of this change, ISO is renumbering some parts of the standard so that it can cover more topics, e.g. tactile and haptic interaction. Under the new numbering, two zeros in the number indicate that the document under consideration is a generic or basic standard. Fundamental aspects are regulated in standards ending with one zero. A standard with three digits other than zero in the number regulates specific aspects. The first part to be renumbered was part 10 (now renumbered to part 110). Part 1 is a general introduction to the rest of the standard. Part 2 addresses task design for working with computer systems. Parts 3–9 deal with physical characteristics of computer equipment. Part 110 and parts 11–19 deal with usability aspects of software, including Part 110 (a general set of usability heuristics for the design of different types of dialogue) and Part 11 (general guidance on the specification and measurement of usability). Ergonomics of Human System Interaction The revised multipart standard is numbered in series as follows:
100 series: Software ergonomics
200 series: Human system interaction processes
300 series: Displays and display-related hardware
400 series: Physical input devices - ergonomics principles
500 series: Workplace ergonomics
600 series: Environment ergonomics
700 series: Application domains - Control rooms
900 series: Tactile and haptic interactions
Within those series, the standard currently includes the following parts:
Part 100: Introduction to standards related to software ergonomics
Part 110: Dialogue principles
Part 112: Principles for the presentation of information
Part 125: Guidance on visual presentation of information
Part 129: Guidance on software individualization
Part 151: Guidance on World Wide Web user interfaces
Part 143: Forms
Part 154: Interactive voice response (IVR) applications
Part 161: Guidance on visual user interface elements
Part 171: Guidance on software accessibility
Part 210: Human-centred design for interactive systems
Part 300: Introduction to electronic visual display requirements
Part 302: Terminology for electronic visual displays
Part 303: Requirements for electronic visual displays
Part 304: User performance test methods for electronic visual displays
Part 305: Optical laboratory test methods for electronic visual displays
Part 306: Field assessment methods for electronic visual displays
Part 307: Analysis and compliance test methods for electronic visual displays
Part 308: Surface-conduction electron-emitter displays (SED)
Part 309 (TR): Organic light-emitting diode (OLED) displays
Part 310 (TR): Visibility, aesthetics and ergonomics of pixel defects
Part 400: Principles and requirements for physical input devices
Part 410: Design criteria for physical input devices
Part 910: Framework for tactile and haptic interaction
Part 920: Guidance on tactile and haptic interactions
ISO 9241-110 (formerly ISO 9241-10, withdrawn) Dialogue principles (2006) In 2006, it revised ISO 9241-10:1996, Ergonomic requirements for office work with visual display terminals (VDTs) -- Part 10: Dialogue principles.
This part deals with general ergonomic principles which apply to the design of dialogues between humans and information systems: suitability for the task, suitability for learning, suitability for individualization, conformity with user expectations, self-descriptiveness, controllability, and error tolerance. ISO 9241-210 (formerly ISO 13407, withdrawn) Human-centred design processes for interactive systems (1999) ISO 9241-210, Ergonomics of human-system interaction, updated in 2019, provides guidance on human-system interaction throughout the life cycle of interactive systems. With its introduction in 2008, it revised ISO 13407, Human-centred design for interactive systems. ISO 9241-302, 303, 305, 307:2008 pixel defects Of particular interest to the lay computer user are the definitions of flat-panel TV and monitor pixel defects provided in the ISO 9241-3xx series of standards (which renders obsolete ISO 13406-2). These identify four classes for measuring pixel defects in flat-panel monitors:
Class 0 panels are completely defect-free, including no full-pixel or sub-pixel defects.
Class 1 panels permit any or all of the following:
1 full bright (“stuck on white”) pixel
1 full dark (“stuck off”) pixel
2 single or double bright or dark sub-pixels
3 to 5 “stuck on” or “stuck off” sub-pixels (depending on the number of each)
Class 2 panels permit any or all of the following:
2 full bright pixels
2 full dark pixels
5 to 10 single or double bright or dark sub-pixels (again, depending on the number of each; no more than 5 bright (“stuck on”) sub-pixels are permitted)
Class 3 panels permit any or all of the following:
5 full bright pixels
15 full dark pixels
50 single or double sub-pixels stuck on or off
(allowed pixel defects per one million pixels in the TFT/LCD matrix)
As of 2010, most premium-branded panel manufacturers specify their products as Class 0, expecting a small number of returns due to early failure where a particular item fails to meet Class 0 but would meet Class 1. Budget panel manufacturers tend to specify their products as Class 1. Most premium-branded finished product manufacturers (retail TVs, monitors, laptops, etc.) tend to specify their products as meeting Class 1 even when they have a Class 0 specified panel inside. Some premium-branded finished product manufacturers have started to specify their products as Class 0 or offer a Class 0 guarantee for an additional premium.
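As a rough illustration of how the class limits above could be applied, the sketch below checks a panel's defect counts (per million pixels) against the stated maxima for full bright pixels, full dark pixels and defective sub-pixels. It deliberately simplifies the standard: the combined sub-pixel rules ("depending on the number of each") are ignored, so this is an approximation of the published criteria rather than a conformance test.

# Stated maxima per million pixels: (full bright, full dark, defective sub-pixels).
CLASS_LIMITS = {
    0: (0, 0, 0),
    1: (1, 1, 5),
    2: (2, 2, 10),
    3: (5, 15, 50),
}

def panel_class(full_bright: int, full_dark: int, sub_pixels: int) -> int:
    # Return the best (lowest-numbered) class whose limits are not exceeded.
    for cls in sorted(CLASS_LIMITS):
        max_bright, max_dark, max_sub = CLASS_LIMITS[cls]
        if full_bright <= max_bright and full_dark <= max_dark and sub_pixels <= max_sub:
            return cls
    return 4  # hypothetical catch-all for panels worse than Class 3

print(panel_class(0, 0, 0))   # 0
print(panel_class(1, 0, 3))   # 1
print(panel_class(3, 6, 20))  # 3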
Previous version ISO 9241 was originally titled Ergonomic requirements for office work with visual display terminals (VDTs) and consisted of the following parts: Part 1: General introduction Part 2: Guidance on task requirements Part 3: Visual display requirements Part 4: Keyboard requirements Part 5: Workstation layout and postural requirements Part 6: Guidance on the work environment Part 7: Display requirements with reflections Part 8: Requirements for displayed colors Part 9: Requirements for non-keyboard input devices Part 10: Dialogue principles Part 11: Guidance on usability Part 12: Presentation of information Part 13: User guidance Part 14: Menu dialogues Part 15: Command dialogues Part 16: Direct manipulation dialogues Part 17: Form filling dialogues Part 20: Accessibility guidelines for ICT equipment and services ISO 9241-1 Part 1: (1997) Ergonomic requirements for office work with visual display terminals (VDTs) : General Introduction This part introduces the multi-part standard ISO 9241 for the ergonomic requirements for the use of visual display terminals for office tasks and explains some of the basic underlying principles. It provides some guidance on how to use the standard and describes how conformance to parts of ISO 9241 should be reported. ISO 9241-2 Part 2: (1993) Guidance on task requirements This part deals with the design of tasks and jobs involving work with visual display terminals. It provides guidance on how task requirements may be identified and specified within individual organisations and how task requirements can be incorporated into the system design and implementation process. ISO 9241-3 Part 3: (1993, deprecated) Visual display requirements This part specifies the ergonomics requirements for display screens which ensure that they can be read comfortably, safely and efficiently to perform office tasks. Although it deals specifically with displays used in offices, it is appropriate to specify it for most applications that require general purpose displays to be used in an office-like environment. ISO 9241-4 Part 4: (1998) Keyboard requirements This part specifies the ergonomics design characteristics of an alphanumeric keyboard which may be used comfortably, safely and efficiently to perform office tasks. Keyboard layouts are dealt with separately in various parts of ISO/IEC 9995: 1994 Information Processing - Keyboard Layouts for Text and Office Systems ISO 9241-5 Part 5: (1998) Workstation layout and postural requirements This part specifies the ergonomics requirement for a Visual Display Terminal workplace which will allow the user to adopt a comfortable and efficient posture. ISO 9241-6 Part 6: (1999) Environmental requirements This part specifies the ergonomics requirements for the Visual Display Terminal working environment which will provide the user with comfortable, safe and productive working conditions. ISO 9241-7 Part 7: (1998, deprecated) Display requirements with reflections This part specifies methods of measurement of glare and reflections from the surface of display screens, including those with surface treatments. ISO 9241-8 Part 8: (1997, deprecated) Requirements for displayed colors This part specifies the requirements for multicolour displays which are largely in addition to the monochrome requirements in Part 3. ISO 9241-9 Part 9: (2000) Requirements for non-keyboard input devices This part specifies the ergonomics requirements for non-keyboard input devices which may be used in conjunction with a visual display terminal. 
It also includes a suggestion for a user-based performance test as an alternative way of showing conformance. The standard covers such devices as the mouse, trackball and other pointing devices, but it does not address voice input. ISO 9241-10 Part 10 (1996, withdrawn) "Dialogue principles": Gives ergonomic principles formulated in general terms; they are presented without reference to situations of use, application, environment or technology. These principles are intended to be used in specifications, design and evaluation of dialogues for office work with visual display terminals (VDTs). ISO 9241-11 Part 11: (1998) To examine the quality of how well tasks are fulfilled by the users (usability testing), ISO 9241-11 framework can be employed. There are three components in the framework: System Effectiveness to examine the users’ ability to complete the given tasks, System Efficiency to examine the required user resources to complete the tasks, and System Satisfaction to record the users’ opinions and feedback. System Effectiveness: Participants are asked to complete six tasks, and the success or failure rate of completing each task is measured to evaluate the app’s efficiency. Task completion is considered successful when the user completed the task without producing an error or asking for assistance. System Efficiency: For evaluating system efficiency, the researcher records the time (in seconds) that participants took to complete each task. Each task is initiated by expressing the word ‘start’ and finished when the user mentioned the end. Upon completing each task, the user is asked to fill out the Single Ease Questionnaire (SEQ) to examine the level of task difficulty. The questionnaire is a seven-point Likert scale in which scale 1 indicates the task as ‘very difficult’, and scale 7 indicates the task as ‘very easy’. System Satisfaction is used to evaluate the overall usability of the apps through System Usability Scale (SUS), which is a usability assessment questionnaire with reliable and valid results. It includes ten questions, each with five items ranging from ‘strongly agree’ to ‘strongly disagree’. The score range is from 0 to 100 and scores higher than 80 are considered high usability while those below 70 are considered low usability. ISO 9241-12 Part 12: (1998) Presentation of information This part contains specific recommendations for presenting and representing information on visual displays. It includes guidance on ways of representing complex information using alphanumeric and graphical/symbolic codes, screen layout, and design as well as the use of windows. ISO 9241-13 Part 13: (1998) User guidance This part provides recommendations for the design and evaluation of user guidance attributes of software user interfaces including Prompts, Feedback, Status, On-line Help and Error Management. ISO 9241-14 Part 14: (1997) Menu dialogues This part provides recommendations for the ergonomic design of menus used in user-computer dialogues. The recommendations cover menu structure, navigation, option selection and execution, and menu presentation (by various techniques including windowing, panels, buttons, fields, etc.). ISO 9241-15 Part 15: (1998) Command language dialogues This part provides recommendations for the ergonomic design of command languages used in user-computer dialogues. The recommendations cover command language structure and syntax, command representations, input and output considerations, and feedback and help. 
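For the System Satisfaction measure described under Part 11 above, the SUS score is commonly computed from the ten responses (each on a 1 to 5 scale) by taking the response minus one for odd-numbered items, five minus the response for even-numbered items, and multiplying the sum by 2.5 to obtain a value between 0 and 100. The sketch below implements that common scoring rule; the sample responses are invented.

def sus_score(responses):
    # responses: ten answers on a 1 (strongly disagree) to 5 (strongly agree) scale.
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0

Read against the thresholds quoted above, a score above 80 would indicate high usability and one below 70 low usability.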
ISO 9241-16 Part 16: (1999) Direct manipulation dialogues This part provides recommendations for the ergonomic design of direct manipulation dialogues, and includes the manipulation of objects, and the design of metaphors, objects and attributes. It covers those aspects of Graphical User Interfaces that are directly manipulated, and not covered by other parts of ISO 9241. ISO 9241-17 Part 17: (1998) Form-filling dialogues This part provides recommendations for the ergonomic design of form filling dialogues. The recommendations cover form structure and output considerations, input considerations, and form navigation. References 09241
42221633
https://en.wikipedia.org/wiki/Finch%20%28software%29
Finch (software)
Finch is an open-source console-based instant messaging client, based on the libpurple library. Libpurple has support for many commonly used instant messaging protocols, allowing the user to log in to various services from one application. Finch uses GLib and ncurses. Finch supports OTR via a libpurple plugin. See also Multiprotocol instant messaging application Comparison of instant messaging protocols Comparison of instant messaging clients Comparison of Internet Relay Chat clients Comparison of XMPP clients Online chat References External links Finch Help Page Finch Source Code Free instant messaging clients Instant messaging clients for Linux AIM (software) clients Free XMPP clients Free Internet Relay Chat clients Internet Relay Chat clients Portable software Cross-platform software Free software programmed in C Yahoo! instant messaging clients Cross-platform free software Free communication software Software that uses ncurses
60555994
https://en.wikipedia.org/wiki/Microsoft%20and%20open%20source
Microsoft and open source
Microsoft, a technology company historically known for its opposition to the open source software paradigm, turned to embrace the approach in the 2010s. From the 1970s through the 2000s under CEOs Bill Gates and Steve Ballmer, Microsoft viewed the communal creation and sharing of code, later to be known as free and open source software, as a threat to its business, and both executives spoke negatively about it. In the 2010s, as the industry turned towards cloud, embedded, and mobile computing—technologies powered by open source advances—CEO Satya Nadella led Microsoft towards open source adoption, although Microsoft's traditional Windows business continued to grow throughout this period, generating revenues of $26.8 billion in the third quarter of 2018, while Microsoft's Azure cloud revenues nearly doubled. Microsoft open sourced some of its code, including the .NET Framework, and made investments in Linux development, server technology, and organizations, including the Linux Foundation and Open Source Initiative. Linux-based operating systems power the company's Azure cloud services. Microsoft acquired GitHub, the largest host for open source project infrastructure, in 2018. Microsoft is among the site's most active contributors. The acquisition led a few projects to migrate away from GitHub, but this proved a short-lived phenomenon: by 2019 there were over 10 million new users of GitHub. Since 2017, Microsoft has been one of the biggest open source contributors in the world, measured by the number of employees actively contributing to open source projects on GitHub, the largest host of source code in the world. History Initial stance on open source The paradigm of freely sharing computer source code—a practice known as open source—traces back to the earliest commercial computers, whose user groups shared code to reduce duplicate work and costs. Following an antitrust suit that forced the unbundling of IBM's hardware and software, a proprietary software industry grew throughout the 1970s, in which companies sought to protect their software products. The technology company Microsoft was founded in this period and has long been an embodiment of the proprietary paradigm and its tension with open source practices, well before the terms "free software" or "open source" were coined. Within a year of founding Microsoft, Bill Gates wrote an open letter that positioned the hobbyist act of copying software as a form of theft. Microsoft successfully expanded in the personal computer and enterprise server markets through the 1990s, partially on the strength of the company's marketing strategies. By the late 1990s, Microsoft came to view the growing open source movement as a threat to its revenue and platform. Internal strategy memos from this period, known as the Halloween documents, describe the company's potential approaches to stopping open source momentum. One strategy was "embrace-extend-extinguish", in which Microsoft would adopt standard technology, add proprietary extensions, and, upon establishing a customer base, lock consumers into the proprietary extension to assert a monopoly over the space. The memos also acknowledged open source as a methodology capable of meeting or exceeding proprietary development methodology. Microsoft downplayed these memos as the opinions of an individual employee and not Microsoft's official position.
While many major companies worked with open source software in the 2000s, the decade was also marked by a "perennial war" between Microsoft and open source in which Microsoft continued to view open source as a scourge on its business and developed a reputation as the archenemy of the free and open source movement. Bill Gates and Microsoft CEO Steve Ballmer suggested free software developers and the Linux kernel were communist. Ballmer also likened Linux to a kind of cancer on intellectual property. Microsoft sued Lindows, a Linux operating system that could run Microsoft Windows applications, as a trademark violation. The court rejected the claim and after Microsoft purchased its trademark, the software changed its name to Linspire. In 2002, Microsoft began experimenting with 'shared source', including the Shared Source Common Language Infrastructure, the core of .NET Framework. Adoption 2000s In April 2004, Windows Installer XML (WiX) was the first Microsoft project to be released under an open-source license, the Common Public License. Initially hosted on SourceForge, it was also the first Microsoft project to be hosted externally. In June 2004, for the first time Microsoft was represented with a booth at LinuxTag, a free software exposition, held annually in Germany. LinuxTag claims to be Europe's largest exhibition for open source software. In August 2004, Microsoft made the complete source code of the Windows Template Library (WTL) available under the Common Public License and released it through SourceForge. Since version 9.1, the library is licensed under the Microsoft Public License. In September 2004, Microsoft released its FlexWiki, making its source code available on SourceForge. The engine is open source, also licensed under the Common Public License. FlexWiki was the third Microsoft project to be distributed via SourceForge, after WiX and Windows Template Library. In 2005, Microsoft released the F# programming language under the Apache License 2.0. In 2006, Microsoft launched its CodePlex open source code hosting site, to provide hosting for open-source developers targeting Microsoft platforms. In the same year, Microsoft ported PHP to Windows under PHP License and also partnered with and commissioned Vertigo Software to create Family.Show, a free and open-source genealogy program, as a reference application for Microsoft's latest UI technology and software deployment mechanism at the time, Windows Presentation Foundation and ClickOnce. The source code has been published on CodePlex and is licensed under the Microsoft Public License. In November 2006, Microsoft and Novell announced a broad partnership to make sure Windows interoperates with SUSE Linux. The initial agreement endured until 2012 and included promises not to sue over patents as well as joint development, marketing and support of Windows – Linux interoperability solutions. In addition, Microsoft and Novell agreed to work to ensure documents created in the free OpenOffice.org productivity suite can seamlessly work in Office 2007, and vice versa. Both companies also agreed to develop on translators to improve interoperability between Office Open XML and OpenDocument formats. The company also purchased 70,000 one-year SUSE Linux Enterprise Server maintenance and update subscription coupons from Novell. Microsoft could distribute the coupons to customers as a way to convince them to choose Novell's Linux rather than a competitor's Linux distribution. 
Microsoft CEO Steve Ballmer acknowledged that more customers are running mixed systems and said about the partnership with Novell: In June 2007, Tom Hanrahan, former Director of Engineering at the Linux Foundation, became Microsoft's Director of Linux Interoperability. The Open Source Initiative approved the Microsoft Public License (MS-PL) and Microsoft Reciprocal License (MS-RL) in 2007. Microsoft open sourced IronRuby, IronPython, and xUnit.net under MS-PL in 2007. In 2008, Microsoft joined the Apache Software Foundation and co-founded the Open Web Foundation with Google, Facebook, Sun, IBM, Apache, and others. Also in 2008, Microsoft began distributing the open source jQuery JavaScript library together with the Visual Studio development environment for use within the ASP.NET AJAX and ASP.NET MVC frameworks. When Microsoft released Hyper-V in 2008, SUSE Linux Enterprise Server became the first non-Windows operating system officially supported on Hyper-V. Microsoft and Novell signed an agreement to work on interoperability two years earlier. Microsoft first began contributing to the Linux kernel in 2009. The CodePlex Foundation, an independent 501(c)(6) non-profit corporation founded by Microsoft and led mostly by Microsoft employees and affiliates, was founded in September 2009. Its goal was to "enable the exchange of code and understanding among software companies and open source communities." Later in September 2010, the name Outercurve Foundation was adopted. In November 2009, Microsoft released the source code of the .NET Micro Framework to the development community as free and open-source software under the Apache License 2.0. StyleCop, an originally proprietary static code analysis tool by Microsoft, was re-released as an open-source in April 2010 on CodePlex. Based on customer feedback, Microsoft relicensed IronRuby, IronPython, and the Dynamic Language Runtime (DLR) under Apache License 2.0 in July 2010. Microsoft signed the Joomla contributor agreement and started upstreaming improvements in 2010. 2010s In 2011, Microsoft started contributing code to the Samba project. The same year, Microsoft also ported Node.js to Windows, upstreaming the code under Apache License 2.0. The first version of Python Tools for Visual Studio (PTVS) was released in March 2011. After acquiring Skype in 2011, Microsoft continued maintaining the Skype Linux client. In July 2011, Microsoft was the fifth largest contributor to the Linux 3.0 kernel at 4% of the total changes. The company became a partner with LinuxTag for their 2011 event and also sponsored LinuxTag 2012. In 2012, Microsoft began hosting Linux virtual machines in the Azure cloud computing service and CodePlex introduced git support. The company also ported Apache Hadoop to Windows, upstreaming the code under MIT License. In March 2012, a completely rewritten version of ChronoZoom was made available as open source via the Outercurve Foundation. Also, ASP.NET, ASP.NET MVC, ASP.NET Razor, ASP.NET Web API, Reactive extensions, and IL2JS (an IL to JavaScript compiler) were released under Apache License 2.0. The TypeScript programming language was released under Apache License 2.0 in 2012. It was the first Microsoft project hosted on GitHub. 
In June 2012, Microsoft contributed Open Management Infrastructure to The Open Group with the goal "to remove all obstacles that stand in the way of implementing standards-based management so that every device in the world can be managed in a clear, consistent, coherent way and to nurture [and] spur a rich ecosystem of standards-based management products." In 2013, Microsoft relicensed the xUnit.net unit testing tool for the .NET Framework under Apache License 2.0 and transferred it to the Outercurve Foundation. Also in 2013, Microsoft added Git support to Visual Studio and Team Foundation Server using libgit2, the most widely deployed version of Git. The company is dedicating engineering hours to help further develop libgit2 and working with GitHub and other community programmers who devote time to the software. In 2014, Satya Nadella was named the new CEO of Microsoft. Microsoft began to adopt open source into its core business. In contrast to Ballmer's stance, Nadella presented a slide that read, "Microsoft loves Linux". At the time of the acquisition of GitHub, Nadella said of Microsoft, "We are all in on open source." As the industry trended towards cloud, embedded, and mobile computing, Microsoft turned to open source to stay apace in these open source dominated fields. Microsoft's adoption of open source included several surprising turns. In 2014, the company opened the source of its .NET Framework to promote its software ecosystem and stimulate cross-platform development. Microsoft also started contributing to the OpenJDK the same year. The Wireless Display Adapter, released in 2014, was Microsoft's first hardware device to use embedded Linux. In the beginning of 2015, Microsoft open sourced the Z3 Theorem Prover, a cross-platform satisfiability modulo theories (SMT) solver. Also in 2015, Microsoft co-founded the Node.js Foundation and joined the R Foundation. After completing the acquisition of Revolution Analytics in 2015, Microsoft integrated the open source R programming language into SQL Server 2016, SQL Server 2017, SQL Server 2019, Power BI, Azure SQL Managed Instance, Azure Cortana Intelligence, Microsoft ML Server and Visual Studio 2017. The same year, Microsoft also open sourced Matter Center, Microsoft's legal practice management software and also Chakra, the Microsoft Edge JavaScript engine at the time. Also in 2015, Microsoft released Windows 10 with native support for the open-source AllJoyn framework, which means that any Windows 10 device can control any AllJoyn-aware Internet of Things (IoT) device in the network. Microsoft has been developing AllJoyn support and contributing code upstream since 2014. Microsoft opened the keynote speech at All Things Open in 2015 by stating that: In August 2015, Microsoft released WinObjC, also known as Windows Bridge for iOS, an open-source middleware toolkit that allows iOS apps developed in Objective-C to be ported to Windows 10. On November 18, 2015, Visual Studio Code was released under the proprietary Microsoft License and a subset of its source code was posted to GitHub under the MIT License. In January 2016, Microsoft became Gold Sponsor of SCALE 14x – the fourteenth annual Southern California Linux Expo, a major convention. When Microsoft acquired Xamarin and LinkedIn in 2016, it relicensed the Mono framework under MIT License and continued maintaining the Kafka stream-processing software platform as open source. 
Also in 2016, Microsoft introduced the Windows Subsystem for Linux, which lets Linux applications run on the Windows operating system. The company invested in Linux server technology and Linux development to promote cross-platform compatibility and collaboration with open source companies and communities, culminating with Microsoft's platinum sponsorship of the Linux Foundation and seat on its Board of Directors. Microsoft released SQL Server and the now open source PowerShell for Linux. Also, Microsoft began porting Sysinternals tools, including ProcDump and ProcMon, to Linux. R Tools for Visual Studio were released under Apache License 2.0 in March 2016. In March 2016, Ballmer changed his stance on Linux, saying that he supports his successor Satya Nadella's open source commitments. He maintained that his comments in 2001 were right at the time but that times have changed. Commentators have noted the adoption of open source and the change of strategy at Microsoft: At EclipseCon in March 2016, Microsoft announced that the company is joining the Eclipse Foundation as a Solutions Member. The BitFunnel search engine indexing algorithm and various components of the Microsoft Bing search engine were made open source by Microsoft in 2016. vcpkg, a cross-platform open source package manager, was released in September 2016. Microsoft joined the Open Source Initiative, the Cloud Native Computing Foundation, and the MariaDB Foundation in 2017. The Open Source Initiative, formerly a target of Microsoft, used the occasion of Microsoft's sponsorship as a milestone for open source software's widespread acceptance. The Debian-based SONiC network operating system was open sourced by Microsoft in 2017. Also the same year, the Windows development was moved to Git and Microsoft open sourced the Git Virtual File System (GVFS) developed for that purpose. Other contributions to Git include a number of performance improvements useful when working with large repositories. Microsoft opened the Microsoft Store to open source applications and gave the keynote speech at the Open Source Summit North America 2017 in Los Angeles. In 2018, the Microsoft CTO of Data spoke with ZDNet about the growing importance of open source stating that: Microsoft became Platinum Sponsor and delivered the keynote of the 2018 Southern California Linux Expo – the largest community-run open-source and free software conference in North America. Microsoft developed Linux-based operating systems for use with its Azure cloud services. Azure Cloud Switch supports the Azure infrastructure and is based on open source and proprietary technology, and Azure Sphere powers Internet of things devices. As part of its announcement, Microsoft acknowledged Linux's role in small devices where the full Windows operating system would be unnecessary. Also in 2018, Microsoft acquired GitHub, the largest host for open source project infrastructure. Microsoft is among the site's most active contributors and the site hosts the source code for Microsoft's Visual Studio Code and .NET runtime system. The company, though, has received some criticism for only providing limited returns to the Linux community, since the GPL license lets Microsoft modify Linux source code for internal use without sharing those changes. In 2018, Microsoft included OpenSSH, tar, and curl commands in Windows. Also, Microsoft released Windows Calculator as open source under MIT License on GitHub. Since 2018, Microsoft has been a sponsor of the AdoptOpenJDK project. 
In April 2018, Microsoft released the File Manager source code under the MIT License. In August 2018, Microsoft added support for the open source Python programming language to Power BI (a brief sketch of such a script appears below). In October 2018, Microsoft joined the Open Invention Network and cross-licensed 60,000 patents with the open source community.
In 2019, Microsoft's Windows Subsystem for Linux 2 transitioned from an emulated Linux kernel to a full Linux kernel within a virtual machine, substantially improving performance. In keeping with the GPL, Microsoft submits its kernel improvements upstream for inclusion in the public mainline release. Also in 2019, Microsoft released Windows Terminal, PowerToys, and the Microsoft C++ Standard Library as open source, and rebased its Edge browser on the open source Chromium project. The Windows Console infrastructure was open sourced under the MIT License alongside Windows Terminal. After publishing exFAT as an open specification, Microsoft contributed the relevant patents to the Open Invention Network (OIN) and started upstreaming the exFAT device driver to the Linux kernel. At Build 2019, Microsoft announced that it was open sourcing its Quantum Development Kit, including its Q# compilers and simulators.
In December 2019, Microsoft released Microsoft Teams for Linux, marking the first time Microsoft released an Office app for the Linux operating system. The app is available as native packages in .deb and .rpm formats. Also in December 2019, after the JS Foundation and the Node.js Foundation merged to form the OpenJS Foundation, Microsoft contributed the popular cross-platform desktop application development tool Electron to the OpenJS Foundation.
2020s
Project Verona, a memory-safe research programming language, was open sourced in January 2020. Microsoft released DeepSpeed, an open source deep learning optimization library for PyTorch, in February 2020. In 2020, Microsoft also open sourced the Java extension for Microsoft SQL Server; MsQuic, a Windows NT kernel library for the QUIC general-purpose transport layer network protocol; Project Petridish, a neural architecture search algorithm for deep learning; and the Fluid Framework for building distributed, real-time collaborative web applications. Microsoft also released the Linux-based Azure Sphere operating system.
In March 2020, Microsoft acquired npm, the open source Node package manager. It is the world's largest software registry, with more than 1.3 million packages and 75 billion downloads a month.
Also in March 2020, Microsoft, together with researchers and leaders from the Allen Institute for AI, the Chan Zuckerberg Initiative, Georgetown University's Center for Security and Emerging Technology, and the National Library of Medicine, released CORD-19, a public dataset of academic articles about COVID-19 and research related to the COVID-19 pandemic, created by text mining the current research literature.
After exploring alternatives and talking with the teams behind various well-known commercial and open source package managers, including Chocolatey, Scoop, Ninite, AppGet, Npackd and the PowerShell-based OneGet (itself a manager of package managers), Microsoft decided to develop and release the open source Windows Package Manager in 2020.
Microsoft was one of the silver sponsors of the X.Org Developer's Conference 2020 (XDC2020) and had multiple developers presenting on the opening day.
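As a brief illustration of the Power BI Python support mentioned above, Power BI can run pandas-based Python scripts against report data. The sketch below is only an approximation of such a script: the DataFrame name, columns and figures are invented for the example, and a stand-in DataFrame is built inline so the script also runs on its own outside Power BI.

```python
# A hedged sketch of the kind of pandas script Power BI's Python integration can run.
# Inside Power BI a DataFrame is supplied by the host (conventionally named `dataset`);
# here we construct a stand-in with invented columns so the script is self-contained.
import pandas as pd

dataset = pd.DataFrame({
    "region": ["North", "South", "North", "West", "South"],
    "sales": [125.0, 90.5, 201.3, 47.2, 66.1],
})

# Aggregate sales by region - a typical lightweight transformation for a report.
summary = dataset.groupby("region", as_index=False)["sales"].sum()
print(summary.sort_values("sales", ascending=False))
```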
Microsoft completed the first phase of porting the Java OpenJDK to Windows 10 on ARM devices in June 2020. In August 2020, Microsoft became a founding member of the Open Source Security Foundation (OpenSSF), a cross-industry forum for a collaborative effort to improve open source software security. In September 2020, Microsoft released the Surface Duo, an Android-based smartphone running a Linux kernel. The same month, Microsoft released OneFuzz, a self-hosted fuzzing-as-a-service platform that automates the detection of software bugs and supports both Windows and Linux (a conceptual sketch appears below).
Microsoft is a major contributor to the Chromium project: the highest percentage of non-Google contributors come from Microsoft (35.2%), and the company contributed 29.4% of all non-Google commits to the source code in 2020.
CBL-Mariner, a cloud infrastructure operating system based on Linux and developed by the Linux Systems Group at Microsoft for its edge network services and as part of its Microsoft Azure cloud infrastructure, was open sourced in 2020.
In February 2021, Microsoft made the source code for its Extensible Storage Engine (ESE) available on GitHub under the MIT License. Also in February 2021, Microsoft, together with four other founding companies (AWS, Huawei, Google, and Mozilla), formed the Rust Foundation, an independent non-profit organization that stewards the open source Rust programming language and ecosystem.
In March 2021, Microsoft became a founding member of the new Eclipse Adoptium Working Group, whose goal is to promote free, open source Java runtimes. Microsoft released a preview of the Microsoft Build of OpenJDK in April 2021. It is available for x64 server and desktop editions of Windows, as well as on Linux and macOS, and the company provides long-term support for this distribution of OpenJDK.
In April 2021, Microsoft also released a Windows 10 test build that includes the ability to run Linux graphical user interface (GUI) apps using Windows Subsystem for Linux 2. The following month, Microsoft launched an open source project to make the Berkeley Packet Filter work on Windows. At the Windows 11 announcement event in June 2021, Microsoft showcased the new Windows Subsystem for Android (WSA), which enables support for the Android Open Source Project (AOSP) and allows users to run Android apps on their Windows desktop. In August 2021, Microsoft announced that it was expanding its partnership with the Eclipse Foundation to become a Strategic Member.
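OneFuzz itself is a full fuzzing-as-a-service platform, and its own API is not reproduced here. Purely as a conceptual illustration of what such a platform automates, the following plain-Python sketch feeds random inputs to a deliberately fragile parsing function and records the inputs that crash it; the target function and alphabet are invented for the example.

```python
# Conceptual sketch of fuzz testing (NOT OneFuzz's API): generate random inputs,
# run them against a target function, and collect the inputs that raise exceptions.
import random
import string
import traceback

def parse_record(text):
    """Toy target: expects 'name:age' and deliberately crashes on malformed input."""
    name, age = text.split(":")   # raises ValueError if ':' is missing or repeated
    return name, int(age)         # raises ValueError if the age is not numeric

def fuzz(target, iterations=1000, max_len=12):
    alphabet = string.ascii_letters + string.digits + ":;,. "
    failures = []
    for _ in range(iterations):
        candidate = "".join(random.choices(alphabet, k=random.randint(0, max_len)))
        try:
            target(candidate)
        except Exception:
            failures.append((candidate, traceback.format_exc(limit=1)))
    return failures

if __name__ == "__main__":
    crashes = fuzz(parse_record)
    print(f"{len(crashes)} crashing inputs found; first few:")
    for text, _ in crashes[:3]:
        print(repr(text))
```

A production fuzzer adds coverage feedback, input minimization and crash triage on top of this basic loop, which is the work platforms such as OneFuzz automate at scale.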
Support of open source organizations
Microsoft is a founding member, joining member, contributing member, and/or sponsor of a number of open source related organizations and initiatives. Examples mentioned above include the Linux Foundation, the Eclipse Foundation, the Open Source Initiative, the Cloud Native Computing Foundation, the Open Invention Network, the OpenJS Foundation, the Open Source Security Foundation (OpenSSF), and the Rust Foundation.
Selected products
.NET – Managed code software framework for Windows, Linux, and macOS operating systems
.NET Bio – Bioinformatics and genomics library created to enable simple loading, saving and analysis of biological data
.NET Compiler Platform (Roslyn) – Compilers and code analysis APIs for the C# and Visual Basic .NET programming languages
.NET Gadgeteer – Rapid-prototyping standard for building small electronic devices
.NET MAUI – A cross-platform UI toolkit
.NET Micro Framework – .NET Framework platform for resource-constrained devices
AirSim – Simulator for drones, cars and other objects, built as a platform for AI research
Allegiance – Multiplayer online game providing a mix of real-time strategy and player-piloted space combat gameplay
ASP.NET
ASP.NET AJAX
ASP.NET Core
ASP.NET MVC
ASP.NET Razor
ASP.NET Web Forms
Atom – Text and source code editor for macOS, Linux, and Microsoft Windows
Babylon.js – A real-time 3D engine using a JavaScript library for displaying 3D graphics in a web browser via HTML5
BitFunnel – A signature-based search engine
Blazor – Web framework that enables developers to create web apps using C# and HTML
Bosque – Functional programming language
C++/WinRT – C++ library for Microsoft's Windows Runtime platform, designed to provide access to modern Windows APIs
C# – General-purpose, multi-paradigm programming language encompassing strong typing, lexically scoped, imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines
CBL-Mariner – Cloud infrastructure operating system based on Linux
ChakraCore – JavaScript engine
ChronoZoom – Project that visualizes time on the broadest possible scale, from the Big Bang to the present day
CLR Profiler – Memory profiler for the .NET Framework
Conference XP – Video conferencing platform
Dafny – Imperative compiled language that targets C# and supports formal specification through preconditions, postconditions, loop invariants and loop variants
Dapr – Event-driven, portable runtime system designed to support cloud native and serverless computing
DeepSpeed – Deep learning optimization library for PyTorch
Detours – C++ library for intercepting, monitoring and instrumenting binary functions on Microsoft Windows
DiskSpd – Command-line tool for storage benchmarking that generates a variety of requests against computer files, partitions or storage devices
Dynamic Language Runtime – Runtime that runs on top of the CLR and provides computer language services for dynamic languages
eBPF on Windows – Register-based virtual machine designed to run a custom 64-bit RISC-like architecture via just-in-time compilation inside the kernel
Extensible Storage Engine – An ISAM database engine that provides transacted data update and retrieval
F* – Functional programming language inspired by ML and aimed at program verification
F# – General-purpose, strongly typed, multi-paradigm programming language that encompasses functional, imperative, and object-oriented programming methods
File Manager – File manager for Microsoft Windows
Fluid Framework – A platform for real-time collaboration across applications
FourQlib – Reference implementation of the FourQ elliptic curve
GW-BASIC – Dialect of the BASIC programming language
Microsoft C++ Standard Library – Implementation of the C++ Standard Library (also known as the STL)
MonoDevelop – Integrated development environment for Linux, macOS, and Windows
MSBuild – Build tool set for managed code as well as native C++ code
MsQuic – Implementation of the IETF QUIC protocol
Neural Network Intelligence – An AutoML toolkit
npm – Package manager for the JavaScript programming language
OneFuzz – Cross-platform fuzz testing framework
Open Live Writer – Desktop blogging application
Open Management Infrastructure – CIM management server
Open XML SDK – Set of managed code libraries to create and manipulate Office Open XML files programmatically
Orleans – Cross-platform software framework for building scalable and robust distributed applications based on the .NET Framework
P – Programming language for asynchronous event-driven programming and the IoT
Power Fx – Low-code, general-purpose programming language for expressing logic across the Microsoft Power Platform
PowerShell – Command-line shell and scripting language
Process Monitor – Tool that monitors and displays all file system activity in real time
ProcDump – Command-line application for creating crash dumps during a CPU spike
Project Mu – UEFI core used in Microsoft Surface and Hyper-V products
Project Verona – Experimental memory-safe research programming language
PowerToys for Windows 10 – System utilities for power users
ReactiveX – A set of tools allowing imperative programming languages to operate on sequences of data regardless of whether the data is synchronous or asynchronous, implementing reactive programming
RecursiveExtractor – An archive file extraction library written in C#
Sandcastle – Documentation generator
StyleCop – Static code analysis tool that checks C# code for conformance to recommended coding styles and a subset of the .NET Framework design guidelines
Terminal – Terminal emulator
TypeScript – Programming language that builds on JavaScript by adding static typing, among the most popular languages on GitHub
U-Prove – Cross-platform technology and accompanying SDK for user-centric identity management
vcpkg – Cross-platform package manager used to simplify the acquisition and installation of third-party libraries
VFS for Git – Virtual file system extension to the Git version control system
Visual Basic .NET – Multi-paradigm, object-oriented programming language
Visual Studio Code – Source code editor and debugger for Windows, Linux and macOS, and GitHub's top open source project
VoTT (Visual Object Tagging Tool) – Electron app for image annotation and labeling
Vowpal Wabbit – Online interactive machine learning system library and program
WikiBhasha – Multilingual content creation application for the Wikipedia online encyclopedia
Windows Calculator – Software calculator
Windows Communication Foundation – Runtime and a set of APIs for building connected, service-oriented applications
Windows Console – Terminal emulator
Windows Driver Frameworks – Tools and libraries that aid in the creation of device drivers for Microsoft Windows
Windows Forms – Graphical user interface (GUI) class library
Windows Package Manager – Package manager for Windows 10
Windows Presentation Foundation – Graphical subsystem (similar to WinForms) for rendering user interfaces in Windows-based applications
Windows Template Library – Object-oriented C++ template library for Win32 development
Windows UI Library – Set of UI controls and features for the Universal Windows Platform (UWP)
WinJS – JavaScript library for cross-platform app development
WinObjC – Middleware toolkit that allows iOS apps developed in Objective-C to be ported to Windows 10
WiX (Windows Installer XML Toolset) – Toolset for building Windows Installer packages from XML
WorldWide Telescope – Astronomy software
XML Notepad – XML editor
XSP – Standalone web server written in C# that hosts ASP.NET for Unix-like operating systems
xUnit.net – Unit testing tool for the .NET Framework
Z3 Theorem Prover – Cross-platform satisfiability modulo theories (SMT) solver
See also
Free software movement
History of free and open-source software
Timeline of free and open-source software
Comparison of open-source and closed-source software
Business models for open-source software
References
Bibliography
Further reading
External links
Open source releases from Microsoft
Open source
History of free and open-source software
62423851
https://en.wikipedia.org/wiki/Outline%20of%20web%20design%20and%20web%20development
Outline of web design and web development
The following outline is provided as an overview of and topical guide to web design and web development, two closely related fields:
Web design encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic design; interface design; authoring, including standardized code and proprietary software; user experience design; and search engine optimization. Often many individuals work in teams covering different aspects of the design process, although some designers cover them all. The term web design is normally used to describe the design process relating to the front-end (client side) design of a website, including writing markup. Web design partially overlaps web engineering in the broader scope of web development. Web designers are expected to have an awareness of usability, and if their role involves creating markup they are also expected to be up to date with web accessibility guidelines.
Web development is the work involved in developing a web site for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing a simple single static page of plain text to complex web-based internet applications (web apps), electronic businesses, and social network services. A more comprehensive list of tasks to which web development commonly refers may include web engineering, web design, web content development, client liaison, client-side/server-side scripting, web server and network security configuration, and e-commerce development. Among web professionals, "web development" usually refers to the main non-design aspects of building web sites: writing markup and coding. Web development may use content management systems (CMS) to make content changes easier and available with basic technical skills.
For larger organizations and businesses, web development teams can consist of hundreds of people (web developers) and follow standard methods like Agile methodologies while developing websites. Smaller organizations may only require a single permanent or contracting developer, or secondary assignment to related job positions such as a graphic designer or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department. There are three kinds of web developer specialization: front-end developer, back-end developer, and full-stack developer. Front-end developers are responsible for the behaviour and visuals that run in the user's browser, back-end developers deal with the servers, and full-stack developers are responsible for both. Demand for React and Node.js developers is currently high worldwide.
Web design
Graphic design
Typography
Page layout
User experience design (UX design)
User interface design (UI design)
Web design techniques
Responsive web design (RWD)
Adaptive web design (AWD)
Progressive enhancement
Tableless web design
Software
Adobe Photoshop
Adobe Illustrator
Sketch (software)
Affinity Designer
Inkscape
Web development
Front-end web development – the practice of converting data to a graphical interface, through the use of HTML, CSS, and JavaScript, so that users can view and interact with that data.
HTML (HyperText Markup Language) (*.html)
CSS (Cascading Style Sheets) (*.css)
CSS framework
JavaScript (*.js)
Package managers for JavaScript
npm (originally short for Node Package Manager)
Server-side scripting (also known as "server-side (web) development" or "back-end (web) development")
ActiveVFP (*.avfp)
ASP (*.asp)
ASP.NET Web Forms (*.aspx)
ASP.NET Web Pages (*.cshtml, *.vbhtml)
ColdFusion Markup Language (*.cfm)
Go (*.go)
Google Apps Script (*.gs)
Hack (*.php)
Haskell (*.hs) (example: Yesod)
Java (*.jsp) via JavaServer Pages
JavaScript using server-side JavaScript (*.ssjs, *.js) (example: Node.js)
Lasso (*.lasso)
Lua (*.lp, *.op, *.lua)
Node.js (*.node)
Parser (*.p)
Perl via the CGI.pm module (*.cgi, *.ipl, *.pl)
PHP (*.php, *.php3, *.php4, *.phtml)
Progress WebSpeed (*.r, *.w)
Python (*.py) (examples: Pyramid, Flask, Django; a short Flask sketch appears after this outline)
R (*.rhtml) (example: rApache)
React (*.jsx)
Ruby (*.rb, *.rbw) (example: Ruby on Rails)
SMX (*.smx)
Tcl (*.tcl)
WebDNA (*.dna, *.tpl)
Full stack web development – involves both front-end and back-end (server-side) development
Software
Atom
IntelliJ IDEA
Sublime Text
Visual Studio Code
See also
Outline of computers
Outline of computing and Outline of information technology
Outline of computer science
Outline of artificial intelligence
Outline of cryptography
Outline of the Internet
Outline of Google
Outline of software
Types of software
Outline of free software
Outline of search engines
Outline of software development
Outline of software engineering
Outline of web design and web development
Outline of computer programming
Programming languages
Outline of C++
Outline of Perl
Outline of computer engineering
References
External links
Computer programming
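To ground the Python entry in the server-side scripting list above, here is a minimal sketch using Flask, one of the frameworks the outline names as an example. The route, port and response payload are illustrative choices, not anything prescribed by the outline.

```python
# Minimal server-side scripting sketch with Flask (pip install flask).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/greeting/<name>")
def greeting(name):
    # Return a small JSON payload - the typical job of a back-end endpoint.
    return jsonify({"message": f"Hello, {name}!"})

if __name__ == "__main__":
    # Development server only; a production deployment would sit behind a WSGI server.
    app.run(port=5000, debug=True)
```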
1330222
https://en.wikipedia.org/wiki/Ventrilo
Ventrilo
Ventrilo (or Vent for short) is proprietary voice over IP (VoIP) software that includes text chat. The Ventrilo client and server are both available as freeware for use with up to 8 people on the same server. Rented servers can maintain up to 400 people. The Ventrilo server is available under a limited license for Microsoft Windows and macOS and is accessible on FreeBSD, Solaris and NetBSD. The client is available for Windows and macOS. However, the macOS client is still unable to properly use most servers because it lacks support for the widely used GSM codec. Flagship Industries does not offer a Linux Ventrilo client, but third-party Ventrilo clients are available for mobile devices, such as Ventrilode for iPhone and Ventriloid for Android. Ventrilo supports the GSM Full Rate and Speex codecs.
Usage
Ventrilo is sometimes used by gamers to communicate with other players on the same team of a multiplayer game, or for general chat. This usage inspired the 2006 hit single "Vi sitter i Ventrilo och spelar DotA" by Swedish artist Basshunter.
Controversy
Flagship Industries Inc. has been involved in several copyright infringement cases as plaintiff.
See also
Comparison of VoIP software
Discord
Mumble
Roger Wilco
Skype
TeamSpeak
References
2002 software
Freeware
MacOS multimedia software
VoIP software
Windows multimedia software