https://en.wikipedia.org/wiki/2017%20NCAA%20Division%20I%20Men%27s%20Basketball%20Tournament
2017 NCAA Division I Men's Basketball Tournament
The 2017 NCAA Division I Men's Basketball Tournament involved 68 teams playing in a single-elimination tournament to determine the men's National Collegiate Athletic Association (NCAA) Division I college basketball national champion for the 2016–17 season. The 79th edition of the tournament began on March 14, 2017, and concluded with the championship game on April 3 at University of Phoenix Stadium in Glendale, Arizona. The championship game was the first to be contested in a Western state since 1995, when Seattle hosted the Final Four. In the Final Four, North Carolina beat Oregon (making its first Final Four appearance since 1939), while Gonzaga defeated South Carolina (both making their first-ever Final Four appearances). North Carolina then defeated Gonzaga 71–65 to win the national championship.

Tournament procedures

A total of 68 teams entered the 2017 tournament, with all 32 conference tournament winners receiving an automatic bid. The Ivy League, which previously granted its automatic tournament bid to its regular-season champion, held a postseason tournament to determine a conference champion for the first time. In previous years, had the Ivy League had two schools tied for first in the standings, a one-game playoff (or a series, as was the case in the 2002 season) determined the automatic bid. On March 10, 2016, the Ivy League's council of presidents approved a four-team tournament in which the top four teams in the regular season would play on March 11 and 12 at Philadelphia's Palestra. The remaining 36 teams received "at-large" bids extended by the NCAA Selection Committee. On January 24, 2016, the NCAA announced that the Selection Committee would, for the first time, unveil in-season rankings of the top four teams in each region on February 11, 2017. 
Eight teams—the four lowest-seeded automatic qualifiers and the four lowest-seeded at-large teams—played in the First Four (the successor to what had been known as the "play-in games" through the 2010 tournament). The winners of these games advanced to the main draw of the tournament. The Selection Committee also seeded the entire field from 1 to 68. The committee's selections produced two historic milestones. The Northwestern Wildcats of the Big Ten Conference made the NCAA Tournament for the first time in school history, officially becoming the last "power conference" school to reach the tournament (ironically, Northwestern had hosted the first-ever NCAA Tournament in 1939). The Wildcats' First Round opponent, the Vanderbilt Commodores of the Southeastern Conference, also made history: with a record of 19–15, they set the mark for the most losses ever by an at-large team in tournament history. Four conference champions also made their first NCAA appearances: North Dakota (Big Sky Conference), UC Davis (Big West Conference), Jacksonville State (Ohio Valley Conference), and first-year Division I school Northern Kentucky (Horizon League). 
Schedule and venues

The following sites were selected to host each round of the 2017 tournament:

First Four
March 14 and 15
University of Dayton Arena, Dayton, Ohio (Host: University of Dayton)

First and Second Rounds
March 16 and 18
Amway Center, Orlando, Florida (Hosts: University of Central Florida and Stetson University)
Bradley Center, Milwaukee (Host: Marquette University)
KeyBank Center, Buffalo, New York (Hosts: Metro Atlantic Athletic Conference, Niagara University and Canisius College)
Vivint Arena, Salt Lake City (Host: University of Utah)
March 17 and 19
Bankers Life Fieldhouse, Indianapolis (Hosts: Horizon League and IUPUI)
BOK Center, Tulsa, Oklahoma (Host: University of Tulsa)
Bon Secours Wellness Arena, Greenville, South Carolina (Hosts: Southern Conference and Furman University)
Golden 1 Center, Sacramento, California (Host: California State University, Sacramento)

Regional Semifinals and Finals (Sweet Sixteen and Elite Eight)
March 23 and 25
Midwest Regional: Sprint Center, Kansas City, Missouri (Host: Big 12 Conference)
West Regional: SAP Center, San Jose, California (Host: Pac-12 Conference)
March 24 and 26
East Regional: Madison Square Garden, New York City (Hosts: St. John's University and Big East Conference)
South Regional: FedExForum, Memphis, Tennessee (Host: University of Memphis)

National Semifinals and Championship (Final Four and Championship)
April 1 and 3
University of Phoenix Stadium, Glendale, Arizona (Host: Arizona State University)

Qualification and selection

Eight teams, out of 351 in Division I, were ineligible to participate in the 2017 tournament due to failing to meet APR requirements, self-imposed postseason bans, or reclassification from a lower division. Hawaii had previously been banned from entering the tournament as a penalty for infractions, but the NCAA later reversed its ban. 
Automatic qualifiers

The following 32 teams were automatic qualifiers for the 2017 NCAA field by virtue of winning their conference's automatic bid.

Notes
Tournament seeds
*See First Four

Bracket
All times are listed as Eastern Daylight Time (UTC−4)
* – Denotes overtime period

First Four – Dayton, Ohio

Game Summaries

East Regional – New York City, New York
East Regional First Round
East Regional Final
East Regional all-tournament team:
Sindarius Thornwell (Sr., South Carolina) – East Regional most outstanding player
P. J. Dozier (So., South Carolina)
KeVaughn Allen (So., Florida)
Chris Chiozza (Jr., Florida)
Nigel Hayes (Sr., Wisconsin)

West Regional – San Jose, California
West Regional First Round
West Regional Final
West Regional all-tournament team:
Johnathan Williams (Jr., Gonzaga) – West Regional most outstanding player
Trevon Bluiett (Jr., Xavier)
J. P. Macura (Jr., Xavier)
Jordan Mathews (Sr., Gonzaga)
Nigel Williams-Goss (Jr., Gonzaga)

Midwest Regional – Kansas City, Missouri
Midwest Regional First Round
Midwest Regional Final
Midwest Regional all-tournament team:
Jordan Bell (Jr., Oregon) – Midwest Regional most outstanding player
Frank Mason III (Sr., Kansas)
Dillon Brooks (Jr., Oregon)
Tyler Dorsey (So., Oregon)
Josh Jackson (Fr., Kansas)

South Regional – Memphis, Tennessee
South Regional Final
South Regional all-tournament team:
Luke Maye (So., North Carolina) – South Regional most outstanding player
De'Aaron Fox (Fr., Kentucky)
Isaac Humphries (So., Kentucky)
Joel Berry II (Jr., North Carolina)
Justin Jackson (Jr., North Carolina)

Final Four

During the Final Four round, regardless of the seeds of the participating teams, the champion of the top overall seed's region (Villanova's East Region) plays against the champion of the fourth-ranked top seed's region (Gonzaga's West Region), and the champion of the second overall seed's region (Kansas's Midwest Region) plays against the champion of the third-ranked top seed's region (North Carolina's South Region). 
University of Phoenix Stadium – Glendale, Arizona
Final Four
National Championship
Final Four all-tournament team:
Joel Berry II (Jr., North Carolina) – Final Four Most Outstanding Player
Nigel Williams-Goss (Jr., Gonzaga)
Justin Jackson (Jr., North Carolina)
Kennedy Meeks (Sr., North Carolina)
Zach Collins (Fr., Gonzaga)

Record by conference

The R64, R32, S16, E8, F4, CG, and NC columns indicate how many teams from each conference were in the round of 64 (first round), round of 32 (second round), Sweet 16, Elite Eight, Final Four, championship game, and national champion, respectively. The "Record" column includes wins in the First Four for the Big 12, Big West, NEC, and Pac-12 conferences and losses in the First Four for the ACC and Big East conferences. The MEAC and Southland each had one representative, both eliminated in the First Four with a record of 0–1. The America East, Atlantic Sun, Big Sky, Big South, CAA, Horizon, Ivy League, MAAC, MAC, Mountain West, Ohio Valley, Patriot, Southern, Summit, Sun Belt, SWAC, and WAC conferences each had one representative, eliminated in the First Round with a record of 0–1.

Media coverage

Television

CBS Sports and Turner Sports held joint U.S. television broadcast rights to the tournament under the NCAA March Madness brand. As part of a cycle beginning in 2016, CBS held rights to the Final Four and championship game. Because CBS did not want its audience diffused across multiple outlets, there were no localized "Team Stream" telecasts of the Final Four or championship game on Turner channels, as in previous years. Following criticism of the two-hour format of the 2016 edition, the Selection Sunday broadcast was shortened to 90 minutes. CBS Sports executive Harold Bryant promised that the unveiling of the bracket would be conducted in an "efficient" manner, leaving more time to discuss and preview the tournament. 
First Four – truTV
First and Second Rounds – CBS, TBS, TNT, and truTV
Regional Semifinals and Finals (Sweet Sixteen and Elite Eight) – CBS and TBS
National Semifinals (Final Four) and Championship – CBS

Studio hosts
Greg Gumbel (New York City and Glendale) – First Round, Second Round, Regionals, Final Four and National Championship Game
Ernie Johnson Jr. (New York City, Atlanta, and Glendale) – First Round, Second Round, Regional Semifinals, Final Four and National Championship Game
Casey Stern (Atlanta) – First Four, First Round and Second Round
Adam Zucker (Glendale) – Final Four

Studio analysts
Charles Barkley (New York City and Glendale) – First Round, Second Round, Regionals, Final Four and National Championship Game
Seth Davis (Atlanta and Glendale) – First Four, First Round, Second Round, Regional Semifinals, Final Four and National Championship Game
Brendan Haywood (Atlanta) – First Four, First Round, Second Round and Regional Semifinals
Clark Kellogg (New York City and Glendale) – First Round, Second Round, Regionals, Final Four and National Championship Game
Jimmy Patsos (Atlanta) – Second Round
Bruce Pearl (Atlanta) – First Round
Kenny Smith (New York City and Glendale) – First Round, Second Round, Regionals, Final Four and National Championship Game
Steve Smith (Glendale) – Final Four
Wally Szczerbiak (New York City and Atlanta) – First Four, Second Round
Buzz Williams (Atlanta) – Regional Semifinals
Jay Wright (Glendale) – Final Four

Commentary teams
Jim Nantz/Bill Raftery/Grant Hill/Tracy Wolfson – First and Second Rounds at Indianapolis, Indiana; South Regional at Memphis, Tennessee; Final Four and National Championship at Glendale, Arizona
Brian Anderson/Chris Webber or Clark Kellogg/Lewis Johnson – First Four at Dayton, Ohio (Tuesday); First and Second Rounds at Greenville, South Carolina; West Regional at San Jose, California (Kellogg called the First Four on Tuesday, with Webber calling the First and Second Rounds and the Regional.)
Verne Lundquist/Jim Spanarkel/Allie LaForce – First and Second Rounds at Buffalo, New York; East Regional at New York City, New York
Kevin Harlan/Reggie Miller/Dan Bonner/Dana Jacobson – First and Second Rounds at Tulsa, Oklahoma; Midwest Regional at Kansas City, Missouri
Ian Eagle/Steve Lavin/Evan Washburn – First and Second Rounds at Orlando, Florida
Spero Dedes/Steve Smith/Len Elmore/Rosalyn Gold-Onwude – First Four at Dayton, Ohio (Wednesday); First and Second Rounds at Sacramento, California
Andrew Catalon/Steve Lappas/Jamie Erdahl – First and Second Rounds at Salt Lake City, Utah
Carter Blackburn/Mike Gminski/Debbie Antonelli/Lisa Byington – First and Second Rounds at Milwaukee, Wisconsin

Radio

Westwood One had exclusive radio rights to the entire tournament. For the first time in the history of the tournament, broadcasts of the Final Four and championship game were available in Spanish.

First Four
Ted Emrich and Austin Croshere – Dayton, Ohio

First and Second Rounds
Scott Graham and Donny Marshall – Buffalo, New York
Brandon Gaudin and Kelly Tripucka – Milwaukee, Wisconsin
Tom McCarthy and Will Perdue – Orlando, Florida
Kevin Kugler and Dan Dickau – Salt Lake City, Utah
John Sadak and Eric Montross/John Thompson – Greenville, South Carolina (Montross – Friday night; Thompson – Friday afternoon and Sunday)
Chris Carrino and Jim Jackson – Indianapolis, Indiana
Craig Way and P. J. Carlesimo – Tulsa, Oklahoma
Jason Benetti and Mike Montgomery – Sacramento, California

Regionals
Ian Eagle and Donny Marshall – East Regional at New York City, New York
Tom McCarthy and Will Perdue – Midwest Regional at Kansas City, Missouri
Gary Cohen and P. J. Carlesimo – South Regional at Memphis, Tennessee
Kevin Kugler and Jim Jackson – West Regional at San Jose, California

Final Four
Kevin Kugler, Clark Kellogg, and Jim Gray – Glendale, Arizona

Internet

Video
Live video of games was available for streaming through the following means:
NCAA March Madness Live (website and app; no CBS games on digital media players; access to games on Turner channels requires TV Everywhere authentication through a provider)
CBS All Access (only CBS games; service subscription required)
CBS Sports website and app (only CBS games)
Bleacher Report website and Team Stream app (only Turner games; access requires subscription)
Watch TBS website and app (only TBS games; requires TV Everywhere authentication)
Watch TNT website and app (only TNT games; requires TV Everywhere authentication)
Watch truTV website and app (only truTV games; requires TV Everywhere authentication)
Websites and apps of cable, satellite, and OTT providers of CBS and Turner (access requires subscription)

Audio
Live audio of games was available for streaming through the following means:
NCAA March Madness Live (website and app)
Westwood One Sports website
TuneIn (website and app)
Websites and apps of Westwood One Sports affiliates

See also
2017 NCAA Division II Men's Basketball Tournament
2017 NCAA Division III Men's Basketball Tournament
2017 NCAA Division I Women's Basketball Tournament
2017 NCAA Division II Women's Basketball Tournament
2017 NCAA Division III Women's Basketball Tournament
2017 National Invitation Tournament
2017 Women's National Invitation Tournament
2017 NAIA Division I Men's Basketball Tournament
2017 NAIA Division II Men's Basketball Tournament
2017 NAIA Division I Women's Basketball Tournament
2017 NAIA Division II Women's Basketball Tournament
2017 College Basketball Invitational
2017 CollegeInsider.com Postseason Tournament
https://en.wikipedia.org/wiki/IBM%20i
IBM i
IBM i (the i standing for integrated) is an operating system developed by IBM for IBM Power Systems. It was originally released in 1988 as OS/400, as the sole operating system of the IBM AS/400 line of systems. It was renamed to i5/OS in 2004, before being renamed a second time to IBM i in 2008. It is an evolution of the System/38 CPF operating system, with compatibility layers for System/36 SSP and AIX applications. It inherits a number of distinctive features from the System/38 platform, including the Machine Interface, the implementation of object-based addressing on top of a single-level store, and the tight integration of a relational database into the operating system. History Origin OS/400 was developed alongside the AS/400 hardware platform beginning in December 1985. Development began in the aftermath of the failure of the Fort Knox project, which left IBM without a competitive midrange system. During the Fort Knox project, a skunkworks project was started at Rochester by engineers, who succeeded in developing code which allowed System/36 applications to run on top of the System/38, and when Fort Knox was cancelled, this project evolved into an official project to replace both the System/36 and System/38 with a single new hardware and software platform. The project became known as Silverlake (named for Silver Lake in Rochester, Minnesota). The operating system for Silverlake was codenamed XPF (Extended CPF), and had originally begun as a port of CPF to the Fort Knox hardware. In addition to adding support for System/36 applications, some of the user interface and ease-of-use features from the System/36 were carried over to the new operating system. Silverlake was available for field test in June 1988, and was officially announced in August of that year. By that point, it had been renamed to the Application System/400, and the operating system had been named Operating System/400. 
The move to PowerPC The port to PowerPC required a rewrite of most of the code below the TIMI. Early versions of OS/400 inherited the Horizontal and Vertical Microcode layers of the System/38, although they were renamed to the Horizontal Licensed Internal Code (HLIC) and Vertical Licensed Internal Code (VLIC) respectively. The port to the new hardware replaced IMPI and the associated microcode, which required the VLIC to be rewritten to target PowerPC instead of IMPI, and for the operating system functionality previously implemented in the HLIC microcode to be re-implemented elsewhere. This led to the HLIC and VLIC being replaced with a single layer named the System Licensed Internal Code (SLIC). The SLIC was implemented in an object-oriented style with over 2 million lines of C++ code, replacing all of the HLIC code, and most of the VLIC code. Owing to the amount of work needed to implement the SLIC, IBM Rochester hired several hundred C++ programmers for the project, who worked on the SLIC in parallel to new revisions of the VLIC for the CISC AS/400 systems. The first release of OS/400 to support PowerPC-based hardware was V3R6. Rebranding The AS/400 product line was rebranded multiple times throughout the 1990s and 2000s. As part of the 2004 rebranding to eServer i5, OS/400 was renamed to i5/OS; the 5 signifying the use of POWER5 processors. The first release of i5/OS, V5R3, was described by IBM as "a different name for the same operating system". In 2006, IBM rebranded the AS/400 line one last time to System i. In April 2008, IBM consolidated the System i with the System p platform to create IBM Power Systems. At the same time, i5/OS was renamed to IBM i, in order to remove the association with POWER5 processors. The two most recent versions of the operating system at that time, which had been released as i5/OS V5R4 and V6R1, were renamed to IBM i 5.4 and 6.1. Along with the rebranding to IBM i, IBM changed the versioning nomenclature for the operating system. 
Prior releases used a Version, Release, Modification scheme, e.g. V2R1M1. This was replaced with a Version.Release scheme, e.g. 6.1. Beginning with IBM i 7.1, IBM replaced the Modification releases with Technology Refreshes. Technology Refreshes are delivered as optional PTFs for specific releases of the operating system which add new functionality or hardware support to the operating system.

Architecture

IBM i is split into two layers: the hardware-dependent System Licensed Internal Code (SLIC) and the hardware-independent Extended Control Program Facility (XPF), separated by a hardware abstraction layer called the Technology Independent Machine Interface (TIMI). Later versions of the operating system gained additional layers, including an AIX compatibility layer named the Portable Application Solutions Environment (originally known as the Private Address Space Environment), and the Advanced 36 Machine environment, which ran System/36 SSP applications in emulation. IBM often uses different names for the TIMI, SLIC and XPF in documentation and marketing materials; for example, the IBM i 7.4 documentation refers to them as the IBM i Machine Interface, IBM i Licensed Internal Code and IBM i Operating System, respectively.

TIMI

The TIMI isolates users and applications from the underlying hardware. This isolation is more thorough than the hardware abstractions of other operating systems, and includes abstracting the instruction set architecture of the processor, the size of the address space, and the specifics of I/O and persistence. This is accomplished through two interrelated mechanisms: Compilers for IBM i do not generate native machine code directly; instead, they generate a high-level intermediate representation defined by the TIMI. 
When a program is run, the operating system carries out ahead-of-time translation of the TIMI instructions into native machine code for the processor, and stores the generated machine code for future executions of the program. If the translation process changes, or a different CPU instruction set is adopted, the operating system can transparently regenerate the machine code from the TIMI instructions without needing to recompile from source code. Instead of operating on memory addresses, TIMI instructions operate on objects. All data in IBM i, such as data files, source code, programs and regions of allocated memory, is encapsulated inside objects managed by the operating system (cf. the "everything is a file" model in Unix). IBM i objects have a fixed type, which defines the set of operations that may be carried out on them (for example, a Program object can be executed, but cannot be edited). The object model hides whether data is stored in primary or secondary storage; the operating system automatically handles retrieving data from, and writing changes back to, permanent storage. The hardware isolation provided by the TIMI allowed IBM to replace the AS/400's 48-bit IMPI architecture with the 64-bit RS64 architecture in 1995. Applications compiled on systems using the IMPI instruction set could run on top of the newer RS64 systems without any code changes, recompilation or emulation, while also taking advantage of 64-bit addressing. There are two different formats of TIMI instructions, known as the Original Machine Interface (OMI) and New Machine Interface (NMI) formats. OMI instructions are essentially the same as the System/38 Machine Interface instructions, whereas NMI instructions are lower-level, resembling the W-code intermediate representation format used by IBM's compilers. IBM partially documents the OMI instructions, whereas the NMI instructions are not officially documented. 
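The deferred-translation scheme described above can be sketched in a few lines of Python. This is a conceptual illustration only, not IBM's implementation; the class and method names are invented. A program object keeps its hardware-independent TIMI instructions (giving it "observability") plus cached native code per target instruction set, so moving to a new ISA only requires retranslation, never recompilation from source:

```python
# Conceptual sketch of TIMI-style deferred translation (names invented;
# not IBM's actual implementation).

class ProgramObject:
    """A program keeps its intermediate representation plus per-ISA native code."""

    def __init__(self, timi_ir):
        self.timi_ir = timi_ir      # stored TIMI instructions ("observability")
        self.native_cache = {}      # target ISA name -> translated "machine code"

    def run(self, isa):
        # Ahead-of-time translation happens on the first execution for a
        # given ISA; later runs reuse the stored native code. A new ISA
        # triggers transparent retranslation from the stored IR.
        if isa not in self.native_cache:
            self.native_cache[isa] = [f"{isa}:{op}" for op in self.timi_ir]
        return self.native_cache[isa]

prog = ProgramObject(["load a", "add b", "store c"])
impi_code = prog.run("IMPI")   # translated once for the 48-bit IMPI systems
rs64_code = prog.run("RS64")   # same object, retranslated for 64-bit RS64
```

The key design point mirrored here is that the IR is never discarded after translation; stripping `timi_ir` would correspond to removing observability, which is why the V6R1 TIMI changes broke objects shipped without it.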
OMI instructions are used by the original AS/400 compilers, whereas NMI instructions are used by the Integrated Language Environment compilers. During the PowerPC port, native support for the OMI format was removed and replaced with a translator which converts OMI instructions into NMI instructions. The storing of the TIMI instructions alongside the native machine code instructions is known as observability. In 2008, the release of i5/OS V6R1 (later known as IBM i 6.1) introduced a number of changes to the TIMI layer that caused problems for third-party software whose vendors had removed observability from the application objects shipped to customers.

SLIC

The SLIC consists of the code which implements the TIMI on top of the IBM Power architecture. In addition to containing most of the functionality typically associated with an operating system kernel, it is responsible for translating TIMI instructions into machine code, and it also implements some high-level functionality which is exposed through the TIMI, such as IBM i's integrated relational database. The SLIC implements IBM i's object-based storage model on top of a single-level store addressing scheme, which does not distinguish between primary and secondary storage, and instead manages all types of storage in a single virtual address space. The SLIC is primarily implemented in C++, and replaced the HLIC and VLIC layers used in versions of OS/400 prior to V3R6.

XPF

The XPF consists of the code which implements the hardware-independent components of the operating system, which are compiled into TIMI instructions. Components of the XPF include the user interface, the Control Language, data management and query utilities, development tools and system management utilities. The XPF also contains the System/36 Environment and System/38 Environment, which provide backwards compatibility APIs and utilities for applications and data migrated from SSP and CPF systems. 
The XPF is IBM's internal name for this layer and, as the name suggests, it began as an evolution of the System/38 Control Program Facility. The XPF is mostly implemented in PL/MI, although other languages are also used.

PASE

PASE provides binary compatibility for user-mode AIX executables which do not interact directly with the AIX kernel, and supports the 32-bit and 64-bit AIX Application Binary Interfaces. PASE was first included in a limited and undocumented form in the V4R3 release of OS/400 to support a port of Smalltalk. It was first announced to customers at the time of the V4R5 release, by which time it had gained significant additional functionality. PASE consists of the AIX userspace running on top of a system call interface implemented by the SLIC. The system call interface allows interoperability between PASE and native IBM i applications; for example, PASE applications can access the integrated database or call native IBM i applications, and vice versa. During the creation of PASE, a new type of single-level storage object named a Teraspace was added to the operating system, which allows each PASE process to have a private 1 TiB space addressed with 64-bit pointers. This was necessary since all IBM i jobs (i.e. processes) typically share the same address space. PASE applications do not use the hardware-independent TIMI instructions, and are instead compiled directly to Power machine code. PASE is distinct from the Qshell environment, which is an implementation of a Unix shell and associated utilities built on top of IBM i's native POSIX-compatible APIs.

Features

Database management

IBM i features an integrated relational database currently known as IBM Db2 for IBM i. The database evolved from the non-relational System/38 database, gaining support for the relational model and SQL. The database originally had no name; instead, it was described simply as "data base support". 
It was given the name DB2/400 in 1994 to indicate comparable functionality to IBM's other commercial databases. Despite the Db2 branding, Db2 for IBM i is an entirely separate codebase from Db2 on other platforms, and is tightly integrated into the SLIC layer of IBM i as opposed to being an optional product. IBM i provides two mechanisms for accessing the integrated database: the so-called native interface, which is based on the database access model of the System/38, and SQL. The native interface consists of the Data Description Specifications (DDS) language, which is used to define schemas, and the OPNQRYF command or QQQQRY query API. Certain Db2 for i features, such as object-relational database management, require SQL and cannot be accessed through the native interface. IBM i has two separate query optimizers, known as the Classic Query Engine (CQE) and the SQL Query Engine (SQE). These are implemented inside the SLIC, alongside a Query Dispatcher which selects the appropriate optimizer depending on the type of the query. Remote access through the native interface and SQL is provided by the Distributed Data Management Architecture (DDM) and the Distributed Relational Database Architecture, respectively. A storage engine for MySQL and MariaDB named IBMDB2I allows applications designed for those databases to use Db2 for i as a backing store. Other open source databases have been ported to IBM i, including PostgreSQL, MongoDB and Redis. These databases run in the PASE environment, and are independent of the operating system's integrated database features.

Networking

IBM i supports TCP/IP networking in addition to the proprietary IBM Systems Network Architecture. IBM i systems were historically accessed and managed through IBM 5250 terminals attached to the system with twinax cabling. With the decline of dedicated terminal hardware, modern IBM i systems are typically accessed through 5250 terminal emulators. 
IBM provides two terminal emulator products for IBM i: IBM i Access Client Solutions, a Java-based client that runs on Linux, macOS and Windows to provide 5250 emulation, and IBM i Access for Web/Mobile, which provides web-based 5250 emulation. In addition, IBM provides a web-based management console and performance analysis product named IBM Navigator for i.

Open-source

Open source applications ported to IBM i include the Apache HTTP Server, Java, Node.js, OpenSSL, Git, gcc, nginx, PHP, Python, Ruby, Lua, R, MariaDB, MySQL, Perl, Redis, MongoDB, PostgreSQL and Vim. Open source software for IBM i is typically packaged using the RPM package format and installed with the YUM package manager. YUM and RPM replaced the 5733-OPS product, which was previously used to install open source software on IBM i. Ports of open source software to IBM i typically target PASE instead of the native IBM i APIs in order to simplify porting.

Programming

Programming languages available from IBM for IBM i include RPG, Control Language, C, C++, Pascal, Java, EGL, Smalltalk, COBOL, BASIC, PL/I and REXX. The Integrated Language Environment (ILE) allows programs written in ILE-compatible languages (C, C++, COBOL, RPG, and CL) to be bound into the same executable and to call procedures written in any of the other ILE languages. When PASE was introduced, it was necessary to compile code for PASE on an AIX system. This requirement was removed in OS/400 V5R2, when it became possible to compile code using the IBM XL compiler suite inside PASE itself. Since then, other compilers have been ported to PASE, including gcc. IBM systems may also come with programming and development software such as the Programming Development Manager. IBM provides an Eclipse-based integrated development environment for IBM i named IBM Rational Developer for i. IBM i uses EBCDIC as the default character encoding, but also provides support for ASCII, UCS-2 and UTF-16. 
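As a concrete illustration of the EBCDIC default mentioned above: CCSID 37, the common US English EBCDIC code page on IBM i, is available in standard Python as the cp037 codec. This sketch runs on any host (it is not IBM i code) and shows why bytes produced under EBCDIC cannot be read as ASCII:

```python
# EBCDIC round-trip using Python's built-in cp037 codec (CCSID 37,
# US English EBCDIC). Illustrative host-side example only.
text = "HELLO"
ebcdic_bytes = text.encode("cp037")

# In EBCDIC, uppercase A-I occupy 0xC1-0xC9 and J-R occupy 0xD1-0xD9,
# so the encoded bytes differ completely from their ASCII counterparts.
assert ebcdic_bytes == b"\xc8\xc5\xd3\xd3\xd6"
assert ebcdic_bytes != text.encode("ascii")

# Decoding with the matching codec recovers the original string.
assert ebcdic_bytes.decode("cp037") == "HELLO"
```

The same codec family is what client tools use when transferring text data between an IBM i host and ASCII/Unicode systems.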
Storage

In IBM i, disk drives may be grouped into an auxiliary storage pool (ASP) in order to organize data, to limit the impact of storage-device failures, and to reduce recovery time. If a disk failure occurs, only the data in the pool containing the failed unit needs to be recovered. ASPs may also be used to improve performance by isolating objects with similar performance characteristics, for example journal receivers, in their own pool. By default, all disk drives are assigned to pool 1. The concept of IBM i pools is similar to the Unix/Linux concept of volume groups; however, with IBM i it is typical for all disk drives to be assigned to a single ASP.

Security

IBM i was one of the first general-purpose operating systems to attain a C2 security rating from the NSA. Support for C2-level security was first added in the V2R3 release of OS/400.

Release timeline

See also
Comparison of operating systems
Library (IBM i)
Object (IBM i)

References

External links
IBM i site
IBM i Documentation
IT Jungle – IBM i news website
MC Press Online – IBM Midrange Computer news website
https://en.wikipedia.org/wiki/Plok%21
Plok!
Plok! is a side-scrolling platform game developed by the British studio Software Creations, with concepts and characters created and owned by Ste and John Pickford; it was released for the Super Nintendo Entertainment System (SNES) in late 1993 by Tradewest in North America, Nintendo in Europe, and Activision in Japan. Players control the hood-headed title character, the king of the island of Akrillic, who protects it from fleas spawned by the Flea Queen, who lives beneath the island's surface, as well as from other bosses trying to overthrow Plok's rule. His versatility lies in his four detachable limbs, which can be fired at targets and enemies (although later segments require temporarily sacrificing some of them), and in several power-ups scattered throughout the game's colorful stages as "presents." Plok!'s history began in the late 1980s as Fleapit, a self-funded coin-op project by the two Pickford brothers; they worked on it while at Zippo Games and programmed it for Rare's "Razz Board" hardware. It was cancelled in 1990 following the closure of Zippo Games, but revived as an SNES game developed at Software Creations after the Pickfords were promoted to higher positions, Ste becoming art director and John producer. Software Creations self-funded the game, with the Pickfords retaining the intellectual property rights. Plok! was positively received by critics, who praised its innovative ideas, variety, presentation, versatile playable character, and level design; however, some dismissed it as yet another cutesy, colorful platformer, and the Pickfords attributed its underwhelming sales to market saturation of mascot platformers.

Gameplay

Plok! has two difficulty modes: "Normal", which is the entire game, and "Child's Play", which omits the harder stages and decreases the speed and hit points of enemies. 
"Plokontinues" are earned when four P-L-O-K tokens are collected, and a token is garnered by finishing a level without dying; if a game over occurs, the player restarts with a score of zero, three lives, and at the place the Plokontinue was received. Saving is limited to two "Permanent Continue Positions", awarded after the completion of the Bobbins Brothers and Rockyfella boss fights, which the player can return to every time the game is reset; there is no password feature.

Plok can launch any of his four limbs (two arms and two legs) at will as projectiles to damage enemies. While a limb instantly returns to Plok's body if it hits an enemy, it takes longer if it does not collide with anything. Limbs can also fire through certain barriers like rock pillars. The only limitation is that he cannot shoot limbs while doing a somersault. In later levels, some puzzles involve having to "sacrifice" one of Plok's limbs to activate scenery-changing targets. Once a limb hits a switch, it is placed on a hanger that is usually next to the target; some targets also require specific limbs. Once Plok is limbless, he moves by bouncing and becomes harder to control. Plok's secondary attack takes the form of the "Speed Blade", a buzzsaw-like jump that not only gives him increased speed, but also dispatches enemies at the highest damage.

Plok can pick up shells, which award extra lives and serve as ammunition for a special amulet received partway through the game; the amulet converts shells into power for Plok's secondary attack. There are also "Magic Fruits" that can heal Plok; the more he punches a fruit, the bigger it gets and the more health he receives. Less frequent are bigger "Golden Fruits" that restore Plok's energy completely. In some levels, certain fruits can send Plok either to a timed bonus stage where he must reach a goal, or to a strange room where he must collect all the shells to progress.
Completing these bonus stages warps Plok to the next level, and some of them can even skip boss fights.

"Presents" temporarily upgrade Plok into a character from his favourite movies, such as "Plocky", which equips him with boxing gloves; "Vigilante Plok", a flamethrower-equipped character à la the "Ploxterminator"; the deerstalker-wearing "Squire Plok", equipped with a blunderbuss; a gun-slinging "Cowboy Plok" based on westerns; and "Rocket Plok", in which he is equipped with multiple rocket launchers. Presents in the Fleapit yield high-tech, futuristic vehicles with weapons, such as a unicycle with a water cannon, an "off-road truck" with a rocket launcher, a jetpack with lasers, a motorbike with grenades, a tank, a bomb-dropping helicopter, a flying saucer with plasma cannons, and a pair of spring pogo shoes.

The player encounters a number of different enemies, such as the aforementioned fleas. In every level except those of Cotton Island, Legacy Island, and the Fleapit, Plok must destroy every single flea to proceed; the last one drops a flag which levitates to its flagpole and allows the player to progress.

Plot

As king, Plok dwells on Akrillic, a large island in the fictional archipelago of Polyesta. Plok wakes up one morning, notices that the big square flag on the pole on his house's rooftop has been stolen, and goes out searching for it. He spots the flag on Cotton Island from far away and sails there to retrieve it. Plok mistakes some imposter flags for his own, becoming irritated in the process. Plok then encounters the two giant creatures responsible for placing the fake flags, the Bobbins Brothers, whom Plok's grandpappy had warned him about, and fights them.
Although he successfully defeats the Bobbins Brothers and gets his big square flag back, Plok sails back to Akrillic to find the island infested and overtaken by "fleas", two-legged blue insects that hatch from eggs and hop around; Plok learns that the theft of his large flag was simply a decoy to lure him away from Akrillic. Plok travels through Akrillic, defeating every single flea on the surface in order to reclaim his island. Partway through the game, Plok places the big square flag back where it belongs outside his home, then takes a break as he sits on a foot of the statue of his grandfather, Grandpappy Plok, wishing he had found an amulet to help him deal with the fleas. He takes a nap and has an odd dream of his grandfather's search for an amulet 50 years ago. In the dream, the player controls Grandpappy Plok as he sails from Akrillic to Legacy Island and shares the same experience his grandson is going through now: traveling through bizarre obstacles, discovering artifacts (including Rockyfella), and dealing with the same Bobbins Brothers as well as their third brother, Irving. Having defeated them, Grandpappy finally dug up an amulet and sailed back to Akrillic victorious.

Back in the present day, Plok wakes up and discovers that the amulet is located at the bottom of the statue. Plok's mission to take back his island continues, now with the ability to turn himself into a saw. Plok then heads into various locations around the island while facing other creatures trying to overthrow Akrillic: the Penkinos, a group of inflatable, floating magicians of mysterious origin living in the north of the island; Womack, a spider living in the island's center whose long legs are its weak points; and Rockyfella, the spirit of the island's soil residing under the mountains of the island's southeast, who bears a grudge against Plok for the flagpoles he dug into the ground.
After clearing out all the enemies on the island, Plok journeys into the source of the fleas, the Fleapit, where he uses various weaponized vehicles. He faces off against the Flea Queen, the leader of the fleas who hatched them, using a high-tech "secret Super-Vehicle" armed with bug spray to defeat her. After the showdown, he returns home for a sleep on his green chair.

Development

1988: Rare hires Zippo Games

In the summer of 1988, Ste and John Pickford were working at Zippo Games, a studio owned by John and their friend Steve Hughes that was struggling to complete games for the 16-bit Atari ST and Amiga computers, with many being cancelled. They learned that Rare was looking for developers of Nintendo Entertainment System (NES) titles at a time when the console had yet to gain steam in the United Kingdom; despite finding the 8-bit system "horribly overpriced and terribly underpowered", the Pickfords took the offer out of financial necessity and their love of the works of Ultimate Play the Game, the previous company of Rare founders Tim and Chris Stamper. Zippo Games was the only company to have been consulted by Rare for NES work. Despite their initial skepticism toward the NES, the Pickfords' experience working on Ironsword, as well as playing Mario and Zelda games on the console as required by the Stamper brothers, gave them an appreciation for the console and shifted their development focus from technical finesse to the player's experience, an approach they carried into Plok!.

1988–1989: Fleapit

While Ste was drawing the shop screen for Ironsword, John came up with the idea of a character with a hangman's hood, which Ste doodled in a small margin of the layout. The detachable limbs were also conceived in that sketch, as below the character was an arrow pointing to a pile of his parts. John then thought of a game where the player had to capture all the jumping fleas in a land, and the two decided the hangman would be the playable character.
Plok's name was then spontaneously coined when John was placing letters from a cassette tape cover on his computer mouse; his full name was "Plok the Exploding Man", describing his separable design. After more encouragement from John, Ste moved to clean sheets of paper to do more sketches, where the character's colors, poses, size (32x40), and "extreme expressions" involving an opening mouth were conceived.

After the formation of these ideas, Chris Stamper went to Zippo Games to introduce a coin-op hardware he had designed, named the Razz Board. In addition to being impressed by the hardware's innovative graphics system, the Pickfords saw an opportunity to make a "real arcade game" like the ones of their youth; they had previously worked on an Amiga-based coin-op system for World Darts, but never a "real" arcade title. As a result, the Pickfords made a deal with Rare to use the hardware to produce their dream game, to own the game property, and to receive a large share of revenue from board sales; however, the deal also left all of the funding to the brothers. The project was named Fleapit.

Ste vaguely recalled Fleapit in a 2014 interview, comparing its gameplay to Excitebots: Trick Racing (2009) in that the player had to catch one of several flying footballs to score a touchdown in the middle of a level; he described the product as only a little more "primitive" than Plok!, featuring stages scrolling only horizontally and vertically, more "set pieces or one-off levels", backgrounds made of food like sausages and doughnuts, and a space level.

As development went on, Rare was profiting mostly from selling NES titles, making it less focused on Razz Board games such as Fleapit. In 1989, Rare, the first company outside of Japan to have a Game Boy development kit, instructed Zippo Games to develop a game for the handheld console; however, this meant stopping the development of Fleapit.
Rare and Zippo Games' relationship worsened; Zippo never completed a Game Boy product as instructed and was bought out by Rare in 1990, being renamed Rare Manchester before it was shut down. Fleapit was cancelled half-completed in 1990 as a result of Zippo Games closing and funds running out; Ste admitted in 2014 that the desire to produce a coin-op caused poor business decisions, such as rejecting more financially rewarding console work. Late in Fleapit's development, the brothers recruited comedian Chris Sievey to voice Plok, contacting him in 1990 via the phone number in the title of one of his songs, "969 1909". They encountered Sievey in his Frank Sidebottom costume at The Ritz venue, where he offered to work for free; however, a short time later, Fleapit was cancelled, and Sievey never voiced the character.

1990–1993: From Zippo to Software Creations

After leaving Zippo, John phoned Richard Kay, the head of Software Creations, the first company outside of Japan to have a development kit for the Super Nintendo Entertainment System (SNES). Kay hired the Pickfords right away to design and program Equinox (1993), a sequel to the NES title Solstice (1990). The Pickfords were eventually promoted to higher positions, Ste becoming art director and John producer. On the side, the Pickfords continued working on Fleapit, coming up with new characters and locations as well as doing more pitch illustrations. Near the completion of Equinox, the Pickfords showcased Fleapit to Kay, pitching it under a new name: Plok! While the brothers and Kay considered many consoles, such as the NES, Super NES, and Game Boy, only a Super NES title came to fruition. Development once again involved self-funding, albeit on the part of Software Creations rather than the Pickfords themselves.
Plok!'s team consisted of the two brothers as producers, art directors, and designers; John Buckley as programmer; Lyndon Brooke as graphic artist; another set of brothers, Tim and Geoff Follin, developing the music and sound; Kevin Edwards and Stephen Ruddy handling compression; Dan Whitworth creating additional graphics; and an 18-year-old Chun Wah Kong joining in spring 1993 as tester. Software Creations, for some of its games, hired animators recently laid off from a Manchester studio, and Whitworth was no exception; he animated the title screen (where Plok plays a harmonica to the theme song) while Ste was on a two-week holiday, and his work was enthusiastically received by the staff.

Plok! was the Pickford brothers' first experience as project managers, as well as the first time they collaborated with other developers to turn their own ideas into a finished product. Buckley later called Plok! the work he was most proud of, and Kong said his experience as the game's tester prepared him for being lead designer on Team Soho's PlayStation 2 title The Getaway (2002): "QA is great grounding for designers. It makes you think critically about how players approach your level; how to reward curiosity if the player wanders off the beaten track; how different approaches could break the game."

Although the Pickfords allowed Buckley and Brooke to offer many contributions, development was not without conflict, the biggest argument occurring late in development and concerning difficulty; while Buckley and Brooke felt it was proper, the Pickfords found it overbearing. To settle the dispute, the eight stages originally made for Cotton Island were moved later into the game as the Grandpappy Plok dream stages, with eight new, easier Cotton Island stages created. According to Buckley, the dream levels were conceived to vary the pacing, particularly with the amulet.
Kong also reported that one of the publishers wanted the first level's difficulty decreased, so the staff reduced the bouncing sprouts' hit points from two to one; however, in the tutorial segment of the final game, the sprouts take two hits as Plok fires an extra arm. While Software Creations did implement collectable continues (named "Plokontinues") for Plok!, it did not include a save battery, which was deemed too costly, and passwords were scrapped out of fear that gaming magazines would spread them around. Kong recalled that most of the game was completed close to the end of Equinox's testing.

Graphics and art

Many of Plok!'s visuals were formulated during its Fleapit stage, when it was being programmed for the Razz Board. The hardware had an unusual system that executed higher-depth graphics and performed better with less data; unlike other hardware that uses bitmap grids encoding colors and transparency through binary numbers, the Razz Board stored pixels as bytes, with the first six bits determining color, the seventh bit setting the pixel's vertical position relative to its predecessor, and the eighth its horizontal position. For instance, using a bitmap system to create a 32-pixel diagonal line requires a grid of 1,024 pixels (32x32) with 992 of them transparent, while making the same line took only 32 bytes on the Razz Board. This meant sprites were not restricted to perfectly square sizes, which Ste took advantage of when creating the text font, although he set an arbitrary limit of 22x29. For the SNES game, Brooke designed two new fonts for 16x16 and 8x8 bitmap grids: one based on Ste's font for Fleapit, in turn based on his lettering in concept drawings, and another for the silent movie-esque screens in the Legacy Island levels. Software Creations' leader and Plok! executive producer Mike Webb reported compressing the game's 16 megabits of graphics data (equivalent to Street Fighter II) by 50%, to eight megabits.
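The byte-per-pixel scheme described above can be illustrated with a short sketch. The Razz Board's exact bit semantics are not documented here, so this is a speculative reconstruction under one plausible reading: each byte carries a 6-bit color index plus two flag bits that step the pixel down or across relative to its predecessor, so a diagonal line costs one byte per pixel instead of a mostly transparent bitmap grid.

```python
def decode_razz_sprite(data, start=(0, 0)):
    """Speculative decoder for the byte-per-pixel scheme described
    above: bits 0-5 = color index, bit 6 = step down one row from
    the previous pixel, bit 7 = step right one column.
    Returns a list of (x, y, color) triples."""
    x, y = start
    pixels = []
    for i, b in enumerate(data):
        color = b & 0x3F          # low six bits: color index
        if i > 0:                 # position is relative to the predecessor
            y += (b >> 6) & 1     # bit 6: move down
            x += (b >> 7) & 1     # bit 7: move right
        pixels.append((x, y, color))
    return pixels

# A 4-pixel diagonal line takes 4 bytes rather than a 4x4 bitmap:
# after the first pixel, each byte sets both step flags (0xC0) plus
# a hypothetical color index of 5.
line = decode_razz_sprite([0x05, 0xC5, 0xC5, 0xC5])
```

Under this reading, the 32-pixel diagonal line from the example above would be 32 such bytes, versus a 32x32 bitmap where 992 of the 1,024 entries encode nothing but transparency.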
Fleapit was the Pickfords' first game for which Ste created concept art, in order to be more professional and to adapt to an increase in presentations to buyers in the industry; according to Ste, this changed the process of green-lighting a game from simply starting to program it: "Everyone in the industry was self taught, and there were no standards or expectations of how a new game should progress. I remember it being a struggle to justify spending work time drawing pretty pictures which wouldn't actually contribute to the game." Ste added shades to the concept art using Letratone as an easy route to a businesslike aesthetic. Ste drew the illustrations in black and white, both because he focused more on shape than on color and because photocopying technology only allowed monochrome prints.

The color scheme was formulated originally for an 8-bit arcade title; to achieve a "natural" look, colors were scrunched into extreme areas of the RGB color wheel. When the project turned into a 16-bit SNES game, the color scheme, although generally the same, was refined with a wider palette, and Ste used magic markers to color the concept art; it was used not only for the game but also for other titles and products of a potential franchise and as illustrations for the instruction manual.

John's first plan for taking the most advantage of the Razz Board concerned how Plok was animated; since he had separable limbs, all six of his parts were independently moving static sprites. The separated limbs made Plok easy to animate and took less graphical data, as static parts could be coded to move and rotate instead of requiring multiple drawn frames. Ste also made Plok's boots and gloves gigantic in order for his detachable-limb design to work to its full potential. The fleas, although animated traditionally with frames, were given only two legs to make animating extreme poses simpler and to keep the sprite from looking too busy. Ste reported "5 or 6" frames for all of the fleas' movements.
For the SNES, Brooke altered the designs of Plok and the fleas only slightly, but Plok's animation method was transformed; the SNES uses the traditional bitmap method of rendering pixels, so Brooke animated him frame by frame.

Concepts and design

Fleapit characters not in Plok! include Armstrong, a "mini-version" of Rockyfella, and Suki, a manga-style character who ran an item shop. The Bobbins Brothers and Womack were transferred from Fleapit (although Womack went through a massive redesign by Brooke that Ste appreciated), while the Penkinos were originally conceived for Plok!. The "floating" limb movements were coded in John's first prototype of Fleapit, with the idea of removing limbs incorporated later on as the Pickfords experimented. The idea of costume and vehicle power-ups had also been around since Fleapit, although the set of power-ups changed, with only the helicopter carrying over; Fleapit-only power-ups included "Robo Plok", a robot form that improved maneuverability and was inspired by Ro-Jaws from the 2000 AD comics; "Ninja Plok", which armed him with an endless supply of throwing stars; and "Super Plok", a superhero with flying abilities. Plok also had the "smart bomb", an ability where he enraged himself to the point of exploding and wiping out all enemies on the screen (as well as his own limbs).

By the start of development of the SNES game, Buckley did not have Fleapit's code; thus, he started programming from what he saw in the VHS footage, including Plok attacking fleas with his limbs and skidding down slanted platforms. Brooke conceived the idea of triggering parts of the scenery with limbs, reasoning that they "looked a bit rough" when constantly looping; however, carrying it out was difficult, as it affected the collision detection and made Plok's movements jolty. He came up with the limb-holding coat hangers two weeks later.
The vehicles were left for the final stages, as incorporating them into earlier levels would have taken too much work to "balance" them with those stages' other aspects.

Audio

When it came to NES and SNES titles, Software Creations was notable for pushing the limits of a console's sound hardware, including with Plok! Work on Plok!'s soundtrack, composed by Tim and Geoff Follin, began halfway into development. It was characterized by Nintendo World Report as typical of other kid-friendly platform games, although "a bit manic, deceptively bombastic, and diverse in tone". Ste reported that Geoff did around 75% of the work and was far more open to discussion with the team, while Tim was more "elusive". Plok! continues the Follin brothers' incorporation of old rock music influences; "Beach", for instance, was inspired by the works of Stevie Wonder, with its guitar solo influenced by Queen guitarist Brian May. Tim Follin composed the title song out of a guitar-played two-chord progression from "Tequila" by rock and roll group The Champs, as two guitar chord samples could fit within memory. "Lead" instruments such as the electric guitar and harmonica were made out of simple waveforms, with the guitar's wave identical to a square. A breathing sample in the boss theme was also used in Equinox and Spider-Man and the X-Men in Arcade's Revenge (1992).

The soundtrack has been released physically twice: on cassette on July 27, 2019 by CANVAS Ltd., and as a limited-edition 500-copy double vinyl on Record Store Day (April 18) 2020 by Respawned Records; Respawned's release had notes and artwork by the Pickfords, with the two 180-gram discs colored red and yellow. Rare composer David Wise has expressed admiration for Plok!, claiming his work on Donkey Kong Country was inspired by the Follin brothers' soundtrack.

Release

Publishing

Software Creations was so confident in Plok! that
the developer tried to persuade Nintendo to publish it; reception from the corporation's Japanese and American branches was positive. Shigeru Miyamoto expressed strong interest in working on the game; when Plok! was only half-finished, he wrote a letter to Kay claiming it was the third-best platform game, below Sonic and Mario, and that he would make it the second-greatest in the genre, above Sonic but below Mario. Tony Harman, product acquisitions and development manager at Nintendo of America, also visited the studio to play Plok! four weeks into its development, and told Kay that Miyamoto was amazed by the game's audio to the point of analyzing it. For unknown reasons, however, Nintendo only published Plok! in Europe, while Tradewest published it in North America and Activision in Japan. Although the reasons for Nintendo's rejection have not been disclosed, Ste speculated that Miyamoto found it too similar to a Super Mario World sequel then in development, particularly with its mini-race levels.

Promotion

The Pickfords, despite owning the IP and developing Plok!, were never consulted on promotion. Tradewest set Plok!'s primary demographic as gamers six to fourteen years old. Although published by Tradewest in the United States, Plok! still received promotion from Nintendo of America; a complete guide and review was published in the October 1993 issue of its magazine Nintendo Power, and a two-page section in a Nintendo Player's Guide book for Mario Paint (1993) included templates of body-part stamps to animate Plok and the flea enemies, as well as a background painting based on one of the Cotton Island levels. In the United Kingdom, Nintendo released a VHS tape, Super Mario All-Stars Video (1993); hosted by Craig Charles, the video features a segment of testers at Nintendo UK promoting upcoming games, including Plok! Club Nintendo, Germany's official Nintendo magazine, ran a comic strip where Plok races against Mario at an Olympic match in space.
Some print advertising was criticized in later years by Ste for not showing enough of the game itself. Tradewest's US print advertisement had its top half taken up by Double Dragon and Battletoads and its bottom half by a picture of the Plok character; the only information conveyed was that Plok! came from the same publisher as the two other franchises. Nintendo's UK print ad depicted a faux desktop screenshot with an image of an angry old lady in a house as the wallpaper; the only Plok! visual shown was its box art, placed very small in the bottom right. However, other print advertisements were much more true to the game, such as Tradewest's series of comic-book-panel ads named The Adventures of Plok, with Plok's origins as the storyline.

Plok! was first announced in June 1993; upon its announcement, an N-Force journalist called it one of the most "surreal" games in the September 1993 line-up. Tradewest presented Plok! at the summer 1993 Consumer Electronics Show; a GameFan writer called Tradewest's "one of the [show's] most impressive third party line-up[s]," highlighting Plok! as "a colorful new action title with a lead character that hurls his arms and legs at attackers."

Reception

Contemporaneous reviews

Upon release, critics variously declared Plok! the best Nintendo release of 1993, another classic by the company, the best platform game of the year, and one of the all-time best in the genre. However, Plok! was compared to many other games of its kind, and even the most favorable reviewers expressed skepticism about playing yet another colorful, cutesy platform game. Some critics ultimately concluded that it was exactly that, and criticized its lack of depth.
A frequent comparison was with Sonic the Hedgehog (1991), particularly regarding its "sugary" tone and Plok's speed-ball attack; GameStar's FI derogatorily labeled the titular character "a puny version" of the Sega franchise's blue hedgehog, criticizing his weird design and calling him names such as "hot pink reddy coloured duck penguin thing" and "freaky pork chop." However, other reviewers regarded Plok as distinctive within the genre, noting innovative aspects such as the costume and weapon power-ups and the limb mechanics. Reviewers also enjoyed its humor, such as Plok's limbless movements, his power-ups, and the nonsensical story; Hyper's Jason Humphreys felt this would help the game be enjoyed by even the biggest detractors of kid-friendly platformers. The hero was applauded for being charming, more lovable than other mascots of his kind, and versatile, particularly with his limbs and his costume and weapon power-ups. Trenton Webb found the limb mechanic "quite fun to mess around with uselessly," and an Electronic Gaming Monthly critic called it "catchy" with "plenty of situations to test it." Superjuegos writer The Elf called the vehicle power-ups "fantastic and fun," and Humphreys claimed the cowboy power-up as his favourite.

Some critics were hooked on Plok!'s gameplay and lauded its playability, perfect control, and variety. Super Play's Jonathan Davies felt it kept adding new concepts as the game went on, and praised some of them, such as patterns of item placements directing the player to off-camera platforms, and bouncing off water instead of drowning. Plok!'s difficulty was also highlighted, mostly attributed to the lack of a password system and save feature and the requirement for continues to be earned. Other contributors to the challenge included long stage lengths and sudden obstacles such as rolling logs. One Total!
journalist reported being confused about what to do next in some stages, and another, from HobbyConsolas, admitted to getting lost in levels, unable to find the targets and fleas required to complete them. While some reviewers approved of how hard Plok! was, others disliked being unable to save or use passwords, reasoning that it was annoying to replay earlier stages over and over again. The cartoony graphics were praised as colorful, surreal, and adorable, and for featuring "vivid backgrounds"; one reviewer called them some of the best on the Super Nintendo, while another compared them to Equinox's. FI appreciated the detail, such as Plok's breathing animation when standing still and the many "little coloured flowers everywhere, looking like real flower power stuff." The music was highlighted (by one critic as the game's best aspect) and noted for pushing the hardware's limitations, and the sound effects were praised.

Commercial performance and plans for a franchise

John Pickford and Kay had faith in Plok! being commercially successful. The Pickford brothers had secured ownership of the IP for the Plok character, common in other media economies but rare in the video games industry, and planned a franchise around it, including sequels, ports, and merchandise. A Mega Drive version was planned using a Software Creations worker's software for automatically converting SNES titles into Mega Drive games; 80% of the code was converted automatically, according to Ste, with the other 20% requiring hands-on work. Ultimately, however, despite Webb announcing the port's completion in an April 1994 interview, it was never released, for unknown reasons. Ste, using magic markers, created artwork for future sequels and merchandise, such as style guides of the characters and illustrations of scenes. The Pickfords planned for sequels to involve Plok searching for the fleas' home, to introduce a setting named Tower Island, and to make Plok's comfy chair (which he sleeps on in the end credits of Plok!)
a more crucial part of the gameplay. The Pickfords also drew concepts for toys used in pitches of the SNES game, such as a doll whose limbs were joined and detached with velcro. During marketing, a Plok model was created in 3D Studio 4 for a promotional photo of Software Creations. However, 1992, late in Plok!'s development, saw the beginning of a saturation of colorful platform games starring cute mascots, like Bubsy and Zool. Ste, in 2004, publicly stated that developers and producers with deeper finances had known about previews of the Fleapit coin-op, which may have influenced them to produce games like it; there are no other reports verifying this. Although the game sold decently over a long period, none of the revenue went to the Pickfords, and Ste suggested the saturation hindered its commercial performance significantly; this, plus Software Creations constantly changing publishers between projects, rendered a Plok! franchise impossible for several years.

Legacy

In 2009, North American company Super Fighter Team released Zaku, a horizontal shooter for the Atari Lynx which features a special guest appearance by Plok. The Pickfords later launched a Plok webcomic, which takes place 20 years after the game and features new characters alongside returning ones like Rockyfella. The comic sometimes used pop-culture references from other games and other media, in-joke references, and commentary on the game's development and its future. In later years, Plok! was ranked the 26th-best SNES title of all time in 1996 (by a list which also named it the best platformer produced in the United Kingdom), ranked the 19th-best platform game by Gamereactor in 2014, and listed in Edge editor Tony Mott's 1001 Video Games You Must Play Before You Die in its 2013 edition. Video game elements introduced in Plok! were noted by retrospective journalists to appear in more popular platformers released in later years.
Games starring characters with floating body parts used as weapons, such as Dynamite Headdy (1994) and Rayman (1995), were released shortly afterwards and garnered much wider exposure. Listing Plok! among "The Most Unappreciated Platformers Of The '90s" for Kotaku, Ben Bertoli wrote it was "influential for portraying a world, named Akrill[ic], based on craft supplies and cloth, a concept that Nintendo itself has come to rely on for many games. The game also provided players with power-ups and vehicles, such as flamethrowers and jetpacks, the likes of which had not been seen in a single game." Sepia-toned flashback stages also made up a chunk of Mickey Mania (1994).
48368376
https://en.wikipedia.org/wiki/Behdad%20Esfahbod
Behdad Esfahbod
Seyed Behdad Esfahbod MirHosseinZadeh Sarabi (; born September 27, 1982) is an Iranian-Canadian software engineer and free software developer. He was a software engineer at Facebook from February 2019 until July 1, 2020; before that he was a Senior Staff Software Engineer at Google from 2010, and earlier worked at Red Hat.

Education

Esfahbod holds an MBA from the University of Toronto's Rotman School of Management, a Master of Science degree in Computer Science from the University of Toronto, and a Bachelor of Science degree in Computer Engineering (Software) from Sharif University. While in high school, Esfahbod won a silver medal at the 1999 International Olympiad in Informatics and a gold in 2000.

Notable projects

Esfahbod was among the founders of Sharif FarsiWeb Inc., which carried out internationalization and standardization projects related to open source and the Persian language. He was a director at the GNOME Foundation from 2007 to 2010, serving as its president from 2008 to 2009. Esfahbod is an expert on font engineering and internationalization, and a frequent speaker at workshops and conferences. He has contributed to many open-source projects. Among the projects he has led are the Cairo, fontconfig, HarfBuzz, and Pango libraries, which are standard parts of the GNOME desktop environment, the Google Chrome web browser, and the LibreOffice suite of programs. He received an O'Reilly Open Source Award in 2013 for his work on HarfBuzz.

Detention

Iran visit

Esfahbod was arrested by the intelligence arm of the Islamic Revolutionary Guards Corps during a 2020 visit to Tehran. He was then moved to Evin Prison, where he was psychologically pressured and interrogated in solitary confinement for seven days, and all of his private data was downloaded from his devices. Iranian security forces released him on the basis of his promise to spy on his friends once he was back in the United States.
References External links Behdad Esfahbod's personal homepage Behdad Esfahbod's Twitter 1982 births Living people Open source people Free software programmers GNOME developers People involved with Unicode Canadian engineers Iranian engineers Canadian people of Iranian descent People from Sari, Iran University of Toronto alumni Sharif University of Technology alumni Facebook employees Google employees Red Hat employees Iranian software developers
45076094
https://en.wikipedia.org/wiki/BOSH%20%28software%29
BOSH (software)
BOSH is an open-source software project that offers a toolchain for release engineering, software deployment and application lifecycle management of large-scale distributed services. The toolchain is made up of a server (the BOSH Director) and a command line tool. BOSH is typically used to package, deploy and manage cloud software. While BOSH was initially developed by VMware in 2010 to deploy Cloud Foundry PaaS, it can be used to deploy other software (such as Hadoop, RabbitMQ, or MySQL). BOSH is designed to manage the whole lifecycle of large distributed systems. Since March 2016, BOSH can manage deployments on both Microsoft Windows and Linux servers. A BOSH Director communicates with a single Infrastructure as a service (IaaS) provider to manage the underlying networking and virtual machines (VMs) or containers. Several IaaS providers are supported: Amazon Web Services EC2, Apache CloudStack, Google Compute Engine, Microsoft Azure, OpenStack, and VMware vSphere. To support more underlying IaaS providers, BOSH uses the concept of a Cloud Provider Interface (CPI). There is an implementation of the CPI for each of the IaaS providers listed above. Typically the CPI is used to deploy VMs, but it can be used to deploy containers as well. Few CPIs exist for deploying containers with BOSH, and only one is actively supported: a CPI that deploys Pivotal Software's Garden containers (Garden is very similar to Docker) on a single virtual machine, run by VirtualBox or VMware Workstation. In theory, any other container engine could be supported, if the necessary CPIs were developed. Because BOSH supports deployments on both VMs and containers, it uses the generic term "instances" to designate both. It is up to the CPI to choose whether a BOSH "instance" is actually a VM or a container. Workflow Once installed, a BOSH server accepts uploads of root filesystems (called "stemcells") and packages (called "releases"). 
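The CPI abstraction described above can be sketched as a small interface with one implementation per IaaS. This is only an illustrative sketch: the real CPI is an executable speaking a JSON-based protocol, and the method names and signatures below paraphrase its responsibilities rather than reproduce the exact API.

```python
from abc import ABC, abstractmethod


class CloudProviderInterface(ABC):
    """Sketch of the CPI abstraction: one implementation per IaaS provider.

    Method names are illustrative assumptions, not the exact RPC names
    of any real CPI."""

    @abstractmethod
    def create_stemcell(self, image_path: str) -> str:
        """Register an OS image with the IaaS and return its cloud ID."""

    @abstractmethod
    def create_vm(self, stemcell_id: str, cloud_properties: dict) -> str:
        """Create an instance (VM or container) from a stemcell."""

    @abstractmethod
    def delete_vm(self, vm_id: str) -> None:
        """Destroy an instance."""


class InMemoryCPI(CloudProviderInterface):
    """Toy in-memory implementation, purely for illustration."""

    def __init__(self):
        self._vms, self._next_id = {}, 0

    def create_stemcell(self, image_path):
        return f"stemcell-{image_path}"

    def create_vm(self, stemcell_id, cloud_properties):
        self._next_id += 1
        vm_id = f"vm-{self._next_id}"
        self._vms[vm_id] = (stemcell_id, cloud_properties)
        return vm_id

    def delete_vm(self, vm_id):
        self._vms.pop(vm_id, None)
```

The Director would hold a reference to whichever CPI implementation matches the target IaaS, and call it whenever instances need to be created or destroyed.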
When a BOSH server has the necessary bits for deploying a given software system, it can be told to proceed, as described by a YAML deployment manifest. BOSH then progressively deploys "instances" (VMs or containers), using canaries to avoid deploying failing configurations. Once a software system is deployed, BOSH monitors its instances continuously, detecting failing instances and resurrecting any missing ones. When a BOSH deployment manifest is changed, BOSH rolls out the implied modifications progressively, instance by instance. This means that BOSH can upgrade live clusters with possibly no downtime. Concepts Release A BOSH release can either be an archive file or a git repository. In both cases, it describes a software system that can be deployed with BOSH. For this purpose, it packages up all related binary assets, source code, compilation scripts, configurable properties, startup scripts and templates for configuration files. BOSH releases are made of "packages" and "jobs". Roughly, BOSH packages provide something that can be run, and BOSH jobs describe how these things are configured and run. A BOSH package details the necessary source code, binary assets (called "blobs"), and compilation scripts for building a given software component. There are two ways to provide binary "blobs". In a BOSH release that is provided as an archive file, blobs are directly included. But with BOSH releases that are provided as git repositories, doing the same tends to be problematic when blobs get big. That's why a BOSH release provides the concept of a "blobstore", from which referenced blobs can be fetched. Most BOSH releases use blobstores that are backed by public Amazon S3 buckets, but there are other ways to refer to a private or a local "blobstore" in a BOSH release. BOSH packages are always subject to a compilation phase, even if this just extracts files from an archive and copies them to the proper target directory. 
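The canary-based rollout described above can be sketched as follows. This is only an outline of the idea (update a few canaries first, abort on failure, then proceed instance by instance); it is not BOSH's actual scheduling code, and the function and callback names are invented.

```python
def rolling_update(instances, apply_change, is_healthy, canaries=1):
    """Sketch of a canary-style rollout: update `canaries` instances
    first, abort if any becomes unhealthy, then update the remaining
    instances one at a time."""
    canary_batch, rest = instances[:canaries], instances[canaries:]
    for inst in canary_batch:  # canaries go first
        apply_change(inst)
        if not is_healthy(inst):
            raise RuntimeError(f"canary {inst!r} failed; rollout aborted")
    for inst in rest:  # then the rest, one instance at a time
        apply_change(inst)
        if not is_healthy(inst):
            raise RuntimeError(f"instance {inst!r} failed; rollout halted")
    return instances
```

Because only the canaries receive the change at first, a failing configuration is caught before it reaches the whole cluster, which is what allows live upgrades with little or no downtime.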
To compile a given package, BOSH spawns an ephemeral compilation instance (VM or container) that includes only the required packages and blobs, as declared by the package specification. In this dedicated instance, BOSH runs the compilation script and seals the compilation result in its database, so that it can be safely reused for reproducible deployments. BOSH jobs, on the other hand, provide configuration properties (which can be documented), templates for configuration files, and startup scripts. BOSH jobs refer to one or more packages as dependencies. Jobs are also sealed into the BOSH database, but the templates for configuration files are rendered at deploy time, when all configuration properties are resolved. These configuration properties are usually IP addresses, port numbers, user names, passwords, domain names, etc. Stemcell A BOSH stemcell packages the basics for creating a new instance (VM or container). Namely, a BOSH stemcell ships an operating system image along with a BOSH agent and a copy of monit, which is used to manage the services (called "jobs") that will be hosted by the instance. The BOSH agent helps BOSH communicate with the instance during its whole life cycle. The stemcell concept in BOSH is similar to virtual machine images like Amazon's AMIs, but BOSH stemcells are not meant to be specialized for any particular usage. Instead, BOSH only provides different stemcells for supporting different operating systems (CentOS, Ubuntu or Windows), or different underlying IaaS providers (AWS or OpenStack). The name "stemcell" originates from the biological term "stem cell", which refers to undifferentiated cells that are able to grow into diverse cell types. Similarly, instances created from a BOSH stemcell are identical at the beginning. After inception, instances are configured with different CPU/memory/storage/network settings, and installed with different software packages. Hence, instances built from the same BOSH stemcell can behave differently. 
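The deploy-time rendering of job configuration templates can be illustrated with Python's string.Template. (BOSH jobs historically use ERB templates, so this is only an analogy, and the file layout and property names below are invented examples.)

```python
from string import Template

# A job's template for a configuration file, with placeholders for
# properties that are only resolved at deploy time. The property names
# (address, port, user) are invented for illustration.
config_template = Template(
    "listen_address=$address\n"
    "listen_port=$port\n"
    "admin_user=$user\n"
)

# At deploy time, the operator-supplied properties are substituted in.
deploy_time_properties = {"address": "10.0.0.5", "port": "8080", "user": "admin"}
rendered = config_template.substitute(deploy_time_properties)
```

The same sealed job can thus be reused across deployments: only the property values (IP addresses, ports, credentials, etc.) differ from one rendered configuration file to the next.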
BOSH Agent The BOSH agent is a service that runs on every BOSH-deployed VM. It does the following: sets up the VM, e.g., configures local disks, configures and formats attached (secondary) disks, and configures networks; accepts requests from the director, e.g., pings and job management requests; manages jobs: starting, stopping, and monitoring health. Deployment A BOSH deployment is basically a YAML deployment manifest, in which the user describes the BOSH releases and BOSH stemcells to use, and how to set up and compose jobs into groups of identical instances (historically misnamed "jobs" and later renamed "instance groups"). Within these "instance groups", BOSH can spread identical instances (VMs or containers) across different availability zones, in order to minimise the risk of all instances going down at the same time. This is particularly useful when deploying highly available databases or applications. In most cases, users don't work with the deployment manifest as one big YAML file. Instead, deployment manifests are split into smaller files that are easier to maintain. These separate files are merged by tools like spiff or spruce, right before they are uploaded to the BOSH server and deployed. In a deployment manifest, all configuration properties, as declared by jobs from all referenced releases, can be customized. Different jobs can refer to configuration properties with the same name, in order to share common settings. Key principles BOSH was purposefully constructed to address the four principles of modern release engineering in the following ways: Identifiability Being able to identify all of the source, tools, environment, and other components that make up a particular release. In its concept of a "release", BOSH packages up all related source code, binary assets, configurable properties, compilation scripts, and startup scripts. This allows users to easily track what is actually deployed, and how it is run. 
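The spreading of an instance group across availability zones can be sketched as a simple round-robin placement. This illustrates the idea only; it is not BOSH's actual placement algorithm, and the naming scheme is invented.

```python
from itertools import cycle


def place_instances(group_name, count, zones):
    """Spread an instance group's identical instances across availability
    zones round-robin, so that no single zone failure can take down the
    whole group."""
    zone_cycle = cycle(zones)
    return [(f"{group_name}/{i}", next(zone_cycle)) for i in range(count)]
```

With two zones and four instances, each zone ends up hosting two instances, so losing one zone still leaves half the group running.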
Additionally, BOSH provides a way to capture the root filesystems that will be the basis of deployed instances (VMs or containers), as single images called "stemcells". BOSH releases and BOSH stemcells are identified by UUIDs and sealed by SHA-1 checksums. Reproducibility The ability to integrate source, third party components, data, and deployment externals of a software system in order to guarantee operational stability. The BOSH tool chain provides a centralized server for operating the deployed systems. This server holds software "releases", operating system images (called "stemcells"), persistent data, and system configuration. Therefore, a given deployment is guaranteed to reproduce an identical result. Consistency The mission to provide a stable framework for development, deployment, audit, and accountability for software components. BOSH achieves such consistency with its software "releases", which bring a consistent framework for developing and deploying software systems. Moreover, audit and accountability are provided by the BOSH server, which allows users to see and track changes made to the deployed systems. Agility The ongoing research into the repercussions of modern software engineering practices on productivity in the software cycle, i.e. continuous integration. The BOSH tool chain integrates well with current best practices of software engineering (including continuous delivery) by providing ways to easily create software releases in an automated way and to update complex deployed systems with simple commands. History BOSH was designed to address shortcomings found in the tools available to manage Cloud Foundry. Chef was used originally, but it was limited in its ability to package software and to spin servers up and down, and in its monitoring and self-management capabilities. 
BOSH was originally developed for Cloud Foundry's own needs, but the project has since grown to be completely generic, and can be used for orchestration of other software such as Hadoop, RabbitMQ, MySQL and similar platform or application software. Architecture A BOSH installation is made of several separate components that can be split across different VMs or containers: A Director that is the "brain" of the server The director database, made of a PostgreSQL instance, a Redis instance and a blobstore for storing compiled packages and jobs A Health Monitor that keeps track of the status of instances (VMs or containers) Many BOSH agents, one on each deployed instance A NATS message bus for connecting the Director, the Health Monitor, and all the deployed BOSH agents A CPI (Cloud Provider Interface), which is just an executable binary conforming to a specific API A BOSH-managed environment usually centers around the Director deployed on a VM. Cloud / Platform / OS compatibility BOSH connects to the underlying IaaS layer through an abstraction called the CPI (Cloud Provider Interface). There are CPIs available for Amazon Web Services, certain OpenStack versions, vSphere, and vCloud. Some community-maintained CPIs exist for Google Compute Engine, Microsoft Azure and CloudStack. Deployment BOSH can itself be deployed as a BOSH release, which may create a "chicken or egg" surprise for newcomers. A BOSH server is not the only software that can deploy BOSH releases, however. There is a BOSH provisioner project that can deploy BOSH in a VM, a Docker container, or on a bare-metal server. This component is used by the BOSH packer provisioner, which creates a Vagrant box running BOSH-lite, which is what most users rely on when learning BOSH. Governance Once a sub-component of Cloud Foundry, BOSH is now a separate open-source project that aims to deploy any distributed software. BOSH is managed by the Cloud Foundry Foundation. Nearly all contributions to BOSH are made by Pivotal. 
Users Pivotal uses BOSH to orchestrate Cloud Foundry within Pivotal Cloud Foundry (PCF), as well as all of the Pivotal Data Services for Cloud Foundry. Announced public users of BOSH and PCF include Axel Springer, Corelogic, IBM, Monsanto, Philips, SAP, and Swisscom. Distributions BOSH is not commercially distributed as a standalone product. It is included as part of Pivotal Cloud Foundry, IBM Bluemix, and HP Helion Developer Platform, and is also used and supported commercially by Cloud Credo, Stark & Wayne, Gstack, and others. References External links Web services Web hosting File hosting Network file systems Cloud storage Cloud computing providers Cloud platforms Open-source cloud hosting services Free software for cloud computing Free software programmed in Ruby VMware
37651129
https://en.wikipedia.org/wiki/Softengi
Softengi
Softengi is a Ukrainian IT outsourcing service provider. It was established in 2009 as a spin-off of the Softline software development company. Softline was established by graduates of the Kyiv Polytechnic Institute in 1995. Softengi's main competencies are software development outsourcing, IT business process outsourcing, mobile application development, 3D modeling, and consulting. The company's software engineers are certified by Microsoft and Oracle. The company's project managers are B- and C-level certified by the International Project Management Association. Softengi has offices in Kyiv, Kharkiv, and Zhytomyr, and representative offices in Tbilisi, Georgia, and California. Softengi belongs to the Intecracy Group, an international IT consortium with more than 600 IT specialists, and is a member of the Hi-Tech Initiative, the European Business Association, USUBC, and IT Ukraine associations. The company is a Microsoft Gold partner. History 1995, Softline founded 1997, first major project for a United States customer 1999, established a distributed team of developers to implement Java projects for US customers 2001, created a software development center based on .Net technology for a US customer 2002, 3-year project on legacy systems reengineering and its transfer to a new technology platform for European customers 2003, certificate of compliance with the ISO 9001:2000 international quality standard 2004, active work in the US market started with the cooperation of Volia Software 2005, certification for compliance with Capability Maturity Model CMM level 3 2006, 3D modeling studio. 2007, international projects organized by the European Commission and the United Nations. The company was certified for compliance with CMMI Level 4 2008, projects for the Ukrainian government and in Georgia, Kazakhstan and Russia 2009, Softengi spun off. Softengi joined the Intecracy Group IT consortium. 
2010, Softengi established a new software development department for iPhone and iPad 2011, Softengi received ISO 9001:2008 certificate. Projects Software development Softengi develops projects using .Net, Java, the web, 3D and mobile technologies (Android, iOS). The company developed a management system for insurance companies in Switzerland, covering insurance company internal processes, performing calculations and generating documents. The company improved the functionality of the internal credit system for the Bank of Georgia, transferring it to Microsoft's SharePoint 2010 platform. A major challenge of the project was the absence of a Georgian language pack for SharePoint; Softengi localized the system with a Georgian user interface. Development centers Softengi's biggest development center is for Enviance, Inc., based in California, USA. Softengi developed software applications based on a SaaS model. In addition to this project, traditional client-server applications were reengineered and deployed to the SaaS environment. More than a million engineer-hours were dedicated to the project, on which more than 40 employees worked. Mobile applications The Accelerometer application for iPhone counts steps, along with distance and speed, and calculates the number of calories burned. The iCSound application for iPad gives the user an opportunity to create music, "seeing" the sound from various musical instruments and pre-composed loops mixed together. iCSound allows users to mix up to 40 music loops with unlimited audio effects. The Country/City guide application for Android mobile devices employs GPS and Google Maps to indicate nearby points of interest (POI). The integration with Layar (augmented reality) displayed points of interest, bus stations, restaurants, etc., and let users add their own POIs. 3D and stereo interactive complex The company's 3D studio was created in 2004. By 2012 it had fulfilled 60 projects. 
The company developed interactive 3D models of the Bombardier CH850 interior in partnership with Eon Reality for Bombardier. The project was implemented using EON Studio in combination with EON I-Catcher technology used for modeling. The project was used by Bombardier at the EBACE convention in Geneva in May 2006. Interactive architectural visualization Softengi's 3D studio developed Unity-based solutions for architectural visualization, allowing construction companies and architectural firms to reduce the cost of 3D presentation development by 40%. Augmented reality Softengi implemented augmented reality projects with Kinect. An augmented reality module using Kinect allowed a user to interact with the image without physical contact, through body postures, capturing movement at special points on the body and thus carrying out a complete three-dimensional recognition of body movements. References Outsourcing companies Software companies of Ukraine
19278548
https://en.wikipedia.org/wiki/Atlas%20%28computer%29
Atlas (computer)
The Atlas Computer was one of the world's first supercomputers, in use from 1962 until 1971. It was considered to be the most powerful computer in the world at that time. Atlas' capacity promoted the saying that when it went offline, half of the United Kingdom's computer capacity was lost. It is notable for being the first machine with virtual memory (at that time referred to as 'one-level store') using paging techniques; this approach quickly spread, and is now ubiquitous. Atlas was a second-generation computer, using discrete germanium transistors. Atlas was created in a joint development effort among the University of Manchester, Ferranti International plc and the Plessey Co., plc. Two other Atlas machines were built: one for British Petroleum and the University of London, and one for the Atlas Computer Laboratory at Chilton near Oxford. A derivative system was built by Ferranti for Cambridge University. Called the Titan, or Atlas 2, it had a different memory organisation and ran a time-sharing operating system developed by Cambridge University Computer Laboratory. Two further Atlas 2s were delivered: one to the CAD Centre in Cambridge (later called CADCentre, then AVEVA), and the other to the Atomic Weapons Research Establishment (AWRE), Aldermaston. The University of Manchester's Atlas was decommissioned in 1971. The final Atlas, the CADCentre machine, was switched off in late 1976. Parts of the Chilton Atlas are preserved by National Museums Scotland in Edinburgh; the main console itself was rediscovered in July 2014 and is at Rutherford Appleton Laboratory in Chilton, near Oxford. History Background Through 1956 there was a growing awareness that the UK was falling behind the US in computer development. In April, B.W. 
Pollard of Ferranti told a computer conference that "there is in this country a range of medium-speed computers, and the only two machines which are really fast are the Cambridge EDSAC 2 and the Manchester Mark 2, although both are still very slow compared with the fastest American machines." This was followed by similar concerns expressed in a May report to the Department of Scientific and Industrial Research Advisory Committee on High Speed Calculating Machines, better known as the Brunt Committee. Through this period, Tom Kilburn's team at Manchester University had been experimenting with transistor-based systems, building two small machines to test various techniques. This was clearly the way forward, and in the fall of 1956, Kilburn began canvassing possible customers on what features they would want in a new transistor-based machine. Most commercial customers pointed out the need to support a wide variety of peripheral devices, while the Atomic Energy Authority suggested a machine able to perform an instruction every microsecond, or as it would be known today, 1 MIPS of performance. This latter request led to the name of the prospective design, MUSE, for microsecond engine. The need to support many peripherals and the need to run fast are naturally at odds. A program that processes data from a card reader, for instance, will spend the vast majority of its time waiting for the reader to send in the next bit of data. To support these devices while still making efficient use of the central processing unit (CPU), the new system would need additional memory to buffer data and an operating system that could coordinate the flow of data around the system. Muse becomes Atlas When the Brunt Committee heard of new and much faster US designs, the Univac LARC and IBM STRETCH, they were able to gain the attention of the National Research Development Corporation (NRDC), responsible for moving technologies from war-era research groups into the market. 
Over the next eighteen months, they held numerous meetings with prospective customers, engineering teams at Ferranti and EMI, and design teams at Manchester and the Royal Radar Establishment. In spite of all this effort, by the summer of 1958, there was still no funding available from the NRDC. Kilburn decided to move things along by building a smaller Muse to experiment with various concepts. This was paid for using funding from the Mark 1 Computer Earnings Fund, which collected funds by renting out time on the University's Mark 1. Soon after the project started, in October 1958, Ferranti decided to become involved. In May 1959 they received a grant of £300,000 from the NRDC to build the system, which would be returned from the proceeds of sales. At some point during this process, the machine was renamed Atlas. The detailed design was completed by the end of 1959, and the construction of the compilers was proceeding. However, the Supervisor operating system was already well behind. This led to David Howarth, newly hired at Ferranti, expanding the operating system team from two to six programmers. In what is described as a herculean effort, led by the tireless and energetic Howarth, the team eventually delivered a Supervisor consisting of 35,000 lines of assembler language which had support for multiprogramming to solve the problem of peripheral handling. Atlas installations The first Atlas was built up at the university throughout 1962. The schedule was further constrained by the planned shutdown of the Ferranti Mercury machine at the end of December. Atlas met this goal, and was officially commissioned on 7 December by John Cockcroft, director of the AEA. This system had only an early version of Supervisor, and the only compiler was for Autocode. It was not until January 1964 that the final version of Supervisor was installed, along with compilers for ALGOL 60 and Fortran. 
By the mid-1960s the original machine was in continual use, based on a 20-hour-per-day schedule, during which time as many as 1,000 programs might be run. Time was split between the University and Ferranti, the latter of which charged £500 an hour to its customers. A portion of this was returned to the University Computer Earnings Fund. In 1969, it was estimated that the computer time received by the University would cost £720,000 if it had been leased on the open market. The machine was shut down on 30 November 1971. Ferranti sold two other Atlas installations, one to a joint consortium of London University and British Petroleum in 1963, and another to the Atomic Energy Research Establishment (Harwell) in December 1964. The AEA machine was later moved to the Atlas Computer Laboratory at Chilton, a few yards outside the boundary fence of Harwell, which placed it on civilian lands and thus much easier to access. This installation grew to be the largest Atlas, containing 48 kWords of 48-bit core memory and 32 tape drives. Time was made available to all UK universities. It was shut down in March 1974. Titan and Atlas 2 In February 1962, Ferranti gave some parts of an Atlas machine to Cambridge University, and in return, the University would use these to develop a cheaper version of the system. The result was the Titan machine, which became operational in the summer of 1963. Ferranti sold two more of this design under the name Atlas 2, one to the Atomic Weapons Research Establishment (Aldermaston) in 1963, and another to the government-sponsored Computer Aided Design Center in 1966. Legacy Atlas had been designed as a response to the US LARC and STRETCH programs. Both ultimately beat Atlas into official use, LARC in 1961, and STRETCH a few months before Atlas. Atlas was much faster than LARC, about four times, and ran slightly slower than STRETCH - Atlas added two floating-point numbers in about 1.59 microseconds, while STRETCH did the same in 1.38 to 1.5 microseconds. 
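The quoted floating-point add times translate into rough peak addition rates; the back-of-the-envelope conversion below is purely illustrative arithmetic based on the figures in the text.

```python
# Floating-point add times quoted above, in seconds.
atlas_add_s = 1.59e-6
stretch_add_s = 1.38e-6  # STRETCH's faster end of the 1.38-1.5 us range

# Peak rates implied by those times (additions per second).
atlas_rate = 1 / atlas_add_s      # roughly 629,000 additions per second
stretch_rate = 1 / stretch_add_s  # roughly 725,000 additions per second
```

At these figures STRETCH's peak add rate is only about 15% higher than Atlas's, consistent with the text's description of Atlas running "slightly slower" than STRETCH.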
No further sales of LARC were attempted, and it is not clear how many STRETCH machines were ultimately produced. It was not until 1964's arrival of the CDC 6600 that the Atlas was significantly bested. CDC later stated that it was a 1959 description of Muse that gave CDC ideas that significantly accelerated the development of the 6600 and allowed it to be delivered earlier than originally estimated. This led to it winning a contract for the CSIRO in Australia, which had originally been in discussions to buy an Atlas. Ferranti was having serious financial difficulties in the early 1960s, and decided to sell the computer division to International Computers and Tabulators (ICT) in 1963. ICT decided to focus on the mid-range market with their ICT 1900 series, a flexible range of machines based on the Canadian Ferranti-Packard 6000. Technical description Hardware The machine had many innovative features, but the key operating parameters were as follows (the store size relates to the Manchester installation; the others were larger): 48-bit word size. A word could hold one floating-point number, one instruction, two 24-bit addresses or signed integers, or eight 6-bit characters. A fast adder that used novel circuitry to minimise carry propagation time. 24-bit (2 million words, 16 million characters) address space that embraced supervisor ('sacred') store, V-store, fixed store and the user store 16K words of core store (equivalent to 96 KB), featuring interleaving of odd/even addresses 8K words of read-only memory (referred to as the fixed store). This contained the supervisor and extracode routines. 96K words of drum store (equivalent to 576 KB), split across four drums but integrated with the core store using virtual memory. The page size was 512 words, i.e. 3072 bytes. 128 high-speed index registers (B-lines) that could be used for address modification in the mostly double-modified instructions. 
The register address space also included special registers such as the extracode operand address and the exponent of the floating-point accumulator. Three of the 128 registers were program counter registers: 125 was supervisor (interrupt) control, 126 was extracode control, and 127 was user control. Register 0 always held value 0. Capability for the addition of (for the time) sophisticated new peripherals such as magnetic tape, including direct memory access (DMA) facilities Peripheral control through V-store addresses (memory-mapped I/O), interrupts and extracode routines, by reading and writing special wired-in store addresses. An associative memory (content-addressable memory) of page address registers to determine whether the desired virtual memory location was in core store Instruction pipelining Atlas did not use a synchronous clocking mechanism—it was an asynchronous Processor—so performance measurements were not easy but as an example: Fixed-point register add – 1.59 microseconds Floating-point add, no modification – 1.61 microseconds Floating-point add, double modify – 2.61 microseconds Floating-point multiply, double modify – 4.97 microseconds Extracode One feature of the Atlas was "Extracode", a technique that allowed complex instructions to be implemented in software. Dedicated hardware expedited entry to and return from the extracode routine and operand access; also, the code of the extracode routines was stored in ROM, which could be accessed faster than the core store. The uppermost ten bits of a 48-bit Atlas machine instruction were the operation code. If the most significant bit was set to zero, this was an ordinary machine instruction executed directly by the hardware. If the uppermost bit was set to one, this was an Extracode and was implemented as a special kind of subroutine jump to a location in the fixed store (ROM), its address being determined by the other nine bits. About 250 extracodes were implemented, of the 512 possible. 
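The instruction decoding described above (a ten-bit operation code in the uppermost bits of the 48-bit word, with the most significant bit selecting between hardware instructions and extracodes) can be sketched as follows. This is an illustrative model of the dispatch logic, not a faithful Atlas emulator.

```python
WORD_BITS = 48


def decode(instruction):
    """Decode the top ten bits of a 48-bit Atlas instruction word.

    If the most significant bit is 1, the remaining nine opcode bits
    select one of the 512 possible extracode routines in the fixed
    store; otherwise the hardware executes the instruction directly."""
    opcode = (instruction >> (WORD_BITS - 10)) & 0x3FF  # top 10 bits
    if opcode & 0x200:                                  # MSB set: extracode
        return ("extracode", opcode & 0x1FF)            # 9-bit routine number
    return ("hardware", opcode)
```

Since only nine bits address the extracode routines, at most 512 are possible, of which the text notes about 250 were actually implemented.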
Extracodes were what would today be called a software interrupt or trap. They were used to call mathematical procedures which would have been too inefficient to implement in hardware, for example sine, logarithm, and square root. But about half of the codes were designated as Supervisor functions, which invoked operating system procedures. Typical examples would be "Print the specified character on the specified stream" or "Read a block of 512 words from logical tape N". Extracodes were the only means by which a program could communicate with the Supervisor. Other UK machines of the era, such as the Ferranti Orion, had similar mechanisms for calling on the services of their operating systems. Software Atlas pioneered many software concepts still in common use today, including the Atlas Supervisor, "considered by many to be the first recognisable modern operating system". One of the first high-level languages available on Atlas was named Atlas Autocode, which was contemporary with Algol 60 and created specifically to address what Tony Brooker perceived to be some defects in Algol 60. The Atlas did, however, support Algol 60, as well as Fortran and COBOL, and ABL (Atlas Basic Language, a symbolic input language close to machine language). Being a university computer, it was patronised by a large number of students, who had access to a protected machine-code development environment. Several of the compilers were written using the Compiler Compiler, considered to be the first of its type. Atlas also had a programming language called SPG (System Program Generator). At run time an SPG program could compile more program code for itself. It could define and use macros. Its variables were in <angle brackets> and it had a text parser, giving SPG program text a resemblance to Backus–Naur form. Hardware/software integration From the outset, Atlas was conceived as a supercomputer that would include a comprehensive operating system. 
The hardware included specific features that facilitated the work of the operating system. For example, the extracode routines and the interrupt routines each had dedicated storage, registers and program counters; a context switch from user mode to extracode mode or executive mode, or from extracode mode to executive mode, was therefore very fast. See also Manchester computers Notes References Citations Bibliography Further reading Parallel addition in digital computers: A new fast 'carry' circuit, T. Kilburn, D.B.G. Edwards, D. Aspinall, Proc. IEE Part B September 1959 The Central Control Unit of the "Atlas" Computer, F. H. Sumner, G. Haley, E. C. Y. Chen, Information Processing 1962, Proc. IFIP Congress '62 One-Level Storage System, T. Kilburn, D. B. G. Edwards, M. J. Lanigan, F. H. Sumner, IRE Trans. Electronic Computers April 1962 Accessed 2011-10-13 The Atlas Supervisor, T. Kilburn, R .B. Payne, D .J. Howarth, reprinted from Computers—Key to Total Systems Control, Macmillan 1962 The Atlas Scheduling System, D. J. Howarth, P. D. Jones, M. T. Wyld, Comp. J. October 1962 The First Computers: History and Architectures, edited by Raúl Rojas and Ulf Hashagen, 2000, MIT Press, A History of Computing Technology, M. R. Williams, IEEE Computer Society Press, 1997, External links The Atlas Autocode Reference Manual The Atlas Supervisor paper (T Kilburn, R B Payne, D J Howarth, 1962) http://bitsavers.informatik.uni-stuttgart.de/pdf/ict_icl/atlas/ (Several reference documents) Ferranti Atlas 1 & 2: List of References Early British computers Atlas Transistorized computers Computer-related introductions in 1962 48-bit computers Collections of the National Museums of Scotland History of Manchester Department of Computer Science, University of Manchester
2040892
https://en.wikipedia.org/wiki/List%20of%20high%20schools%20in%20South%20Dakota
List of high schools in South Dakota
This is a list of high schools in the state of South Dakota. Current schools Closed schools This is an incomplete list of former schools, nicknames, years closed, and additional info. Agar High School "Hi-Pointers" (pre-1984)/"Chargers" (1993-20??), Agar (Closed 1984, reopened 1993, closed 20??, now part of Sully Buttes High School) Akaska High School "Raiders" Alpena High School "Wildcats", Alpena Amherst High School "Wildcats" Andover High School "Gorillas" Ardmore High School "Rattlers" Argonne High School "Arrows" Artesian-Letcher High School "Rams", Artesian (Closed, now part of Sanborn Central High School) Ashton High School "Cardinals" Astoria High School "Comets" Athol High School "Arrows" Augustana Academy "Knights", Canton Bancroft Eagles Barnard Bears Bath Warriors Belvidere Comets Big Stone City Lions Blunt Monarchs Bonesteel Tigers Bonilla Eagles Bradley Bombers Brandt Bulldogs Brentford Braves Bridgewater Wildcats Bristol High School "Pirates", Bristol (Closed in 2004) Bruce Bees Bryant Scotties Buffalo Gap Buffaloes Burbank Owls Burke Bulldogs Canning Coyotes Canova Eagles Carthage Eagles Cathedral High School "Gaels", Rapid City Cathedral High School "Irish", Sioux Falls Cavour Cougars Chancellor Wildcats Claire City Comets Claremont Honkers Clark Comets Colman Wildcats Colton Panthers Conde High School "Spartans", Conde (closed, now part of Doland High, Groton High, Northwestern High) Corona Midgets Corsica Comets Cottonwood Coyotes Columbia Comets Cresbard High School "Comets", Cresbard (Closed, now part of Faulkton Area School) Dallas Coyotes Dante Gophers Davis Bulldogs Deadwood Bears Delmont Wildcats Draper Bulldogs Eagle Butte Warriors Eastern High School "Yellowjackets", Madison Egan Bluejays Elk Point Pointers Emery High School "Eagles", Emery (Consolidated with Bridgewater in 2011, now called Bridgewater-Emery High School) Erwin Arrows Fairfax Broncos Farmer Orioles Fedora Tigers Florence Flyers Forestburg Buccaneers Frankfort Falcons 
Franklin Flyers Fort Pierre Buffaloes Fort Thompson Buffaloes Fulton Pirates Gann Valley Buffaloes Garden City Dragons Gary Tigers Gayville Orioles Geddes Rams General Beadle High School "Bluejays", Madison Glenham High School "Eagles", Glenham, South Dakota (closed in 1984, now part of Selby Area Schools) Goodwin Eagles Harrold High School, Harrold (Closed, now part of Highmore-Harrold Schools) Hartford Pirates Hayti Redbirds Hazel Mustangs Hecla Rockets Henry Owls Hetland Broncos Hitchcock Bluejays Holabird Cardinals Hosmer Tigers (Closed, now part of Edmunds-Central Schools) Hudson Trojans Humboldt Eagles Hurley High School "Bulldogs", Hurley (Closed, now part of Viborg-Hurley High School) Interior Cubs Irene High School "Cardinals", Irene (Closed, now part of Irene-Wakonda) Isabel High School "Wildcats", Isabel (Closed, now part of Timber Lake High, McIntosh High) Java Panthers (Closed, now part of Selby Area Schools) Jefferson Blackhawks Kennebec Canaries Kidder Tigers Lake City Golden Eagles Lake Norden Bluejays Lane Trojans Lebanon Bulldogs Letcher Tigers Lily Wildcats Logan Arrows Loyalton Lions Lyons Lions Midland High School, Midland (Closed, now part of Kadoka Area High School) Martin Warriors Meckling Panthers Melette Terriers Monroe "Wooden Shoed Canaries" New Effington Tigers Nisland Mustangs Northville Panthers Northwestern Lutheran Academy Wildcats Notre Dame High School "Comets", Mitchell Oglala Indians Oldham Dragons Olivet Eagles Onaka Pirates Onida Warriors Orient Hawks Orland Eagles Peever Panthers Pickstown Engineers Piedmont Hawks Pierpont Panthers Plano Panthers Pollock High School "Bulldogs", Pollock (Closed, now part of Mobridge-Pollock High School) Polo Bears Presho Wolves Provo Rattlers Pukwana Wildcats Quinn Quintuplets Ramona Rockets Ravinia Bears Raymond Redwings Ree Heights Warriors Reliance Longhorns Rockham Trojans Roscoe Hornets Roslyn High School "Vikings", Roslyn St. Agatha High School "Agates", Howard St. Lawrence Wolves St. 
Martens High School "Ravens", Rapid City St. Thomas Shamrocks Salem Cubs Seneca Bluejays Sherman Tigers Sinai Rebels Spencer Cardinals South Dakota School for the Deaf Pheasants South Shore Comets Springfield Trojans Stickney High School "Raiders", Stickney (Closed, now part of Corsica-Stickney High School) Strandburg Tigers Stratford Vikings Tabor Cardinals Thomas Tigers Thompson Buffaloes Thorpe Wolves Trent Warriors Toronto Vikings Tripp Wildcats Tulare Chieftains Turton Frogs Tyndall Panthers Vale Beetdiggers Valley Springs Wolverines Veblen High School "Cardinals", Veblen (Closed, now part of Sisseton High School) Vienna Panthers Virgil Pirates Vivian Bearcats Volin Bluejays Wakonda High School "Warriors", Wakonda (Closed, now part of Irene-Wakonda) Wallace Bulldogs Wasta Flyers Waverly Woodchucks Wentworth Warriors Wessington Warriors Wessington Springs Academy "Hornets", Wessington Springs West Lyman Raiders White Wildcats White Lake Wolverines Winfred Warriors Willow Lake Pirates Witten Wildcats Wolsey Cardinals Wood Bulldogs Worthing Eagles Yale Trojans See also List of school districts in South Dakota List of colleges and universities in South Dakota Notes A Sunshine Bible Academy is located 13 miles south of Miller. Sources Links to school web-sites. South Dakota High School Activities Association South Dakota High schools
4051
https://en.wikipedia.org/wiki/Brian%20Kernighan
Brian Kernighan
Brian Wilson Kernighan (born 1942) is a Canadian computer scientist. He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language (The C Programming Language) with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language ("it's entirely Dennis Ritchie's work"). He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The "K" of K&R C and of AWK both stand for "Kernighan". In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a Professor of Computer Science at Princeton University since 2000 and is the Director of Undergraduate Studies in the Department of Computer Science. In 2015, he co-authored the book The Go Programming Language. Early life and education Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled "Some graph partitioning problems related to program segmentation" under the supervision of Peter G. Weiner. Career and research Kernighan has held a professorship in the Department of Computer Science at Princeton since 2000. Each fall he teaches a course called "Computers in Our World", which introduces the fundamentals of computing to non-majors. Kernighan was the software editor for Prentice Hall International. 
His "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for BASIC, FORTRAN, and Pascal, and most notably his "Ratfor" (rational FORTRAN) was put in the public domain. He has said that if stranded on an island with only one programming language it would have to be C. Kernighan coined the term "Unix" and helped popularize Thompson's Unix philosophy. Kernighan is also known as a coiner of the expression "What You See Is All You Get" (WYSIAYG), which is a sarcastic variant of the original "What You See Is What You Get" (WYSIWYG). Kernighan's term is used to indicate that WYSIWYG systems might throw away information in a document that could be useful in other contexts. In 1972, Kernighan described memory management in strings using "hello" and "world" in the programming language B; this became the iconic example we know today. Kernighan's original 1978 implementation of Hello, World! was sold at The Algorithm Auction, the world's first auction of computer algorithms. In 1996, Kernighan taught CS50, Harvard University's introductory course in computer science. Kernighan was an influence on David J. Malan, who subsequently taught the course and scaled it up to run at multiple universities and in multiple digital formats. Kernighan was elected a member of the National Academy of Engineering in 2002 for contributions to software and to programming languages. He was also elected a member of the American Academy of Arts and Sciences in 2019. Other achievements during his career include: The AMPL programming language The AWK programming language, with Alfred Aho and Peter J. Weinberger, and its book The AWK Programming Language ditroff, or "device independent troff", which allowed troff to be used with any device The Elements of Programming Style, with P. J. Plauger The first documented "Hello, world!" 
program, in Kernighan's "A Tutorial Introduction to the Language B" (1972) Ratfor Software Tools, a book and set of tools for Ratfor, co-created in part with P. J. Plauger Software Tools in Pascal, a book and set of tools for Pascal, with P. J. Plauger The C Programming Language, with C creator Dennis Ritchie, the first book on C The eqn typesetting language for troff, with Lorinda Cherry The m4 macro processing language, with Dennis Ritchie The pic typesetting language for troff The Practice of Programming, with Rob Pike The Unix Programming Environment, a tutorial book, with Rob Pike "Why Pascal is Not My Favorite Programming Language", a popular criticism of Niklaus Wirth's Pascal. Some parts of the criticism are obsolete due to ISO 7185 (Programming Languages - Pascal); the criticism was written before ISO 7185 was created. (AT&T Computing Science Technical Report #100) Publications The Elements of Programming Style (1974, 1978) with P. J. Plauger Software Tools (1976) with P. J. Plauger The C Programming Language (1978, 1988) with Dennis M. Ritchie Software Tools in Pascal (1981) with P. J. Plauger The Unix Programming Environment (1984) with Rob Pike The AWK Programming Language (1988) with Alfred Aho and Peter J. Weinberger The Practice of Programming (1999) with Rob Pike AMPL: A Modeling Language for Mathematical Programming, 2nd ed. 
(2003) with Robert Fourer and David Gay D is for Digital: What a well-informed person should know about computers and communications (2011) The Go Programming Language (2015) with Alan Donovan Understanding the Digital World: What You Need to Know about Computers, the Internet, Privacy, and Security (2017) Millions, Billions, Zillions: Defending Yourself in a World of Too Many Numbers (2018) UNIX: A History and a Memoir (2019) See also List of pioneers in computer science References External links Brian Kernighan's home page at Bell Labs "Why Pascal is Not My Favorite Programming Language" — By Brian Kernighan, AT&T Bell Labs, 2 April 1981 "Leap In and Try Things" — Interview with Brian Kernighan — on "Harmony at Work Blog", October 2009. An Interview with Brian Kernighan — By Mihai Budiu, for PC Report Romania, August 2000 – Interview by Video — TechNetCast At Bell Labs: Dennis Ritchie and Brian Kernighan (1999-05-14) Video (Princeton University, September 7, 2003) — "Assembly for the Class of 2007: 'D is for Digital and Why It Matters'" A Descent into Limbo by Brian Kernighan Photos of Brian Kernighan Video interview with Brian Kernighan for Princeton Startup TV (2012-03-20) The Setup, Brian Kernighan 1942 births Living people Canadian computer scientists Canadian computer programmers Computer programmers Inferno (operating system) people Canadian people of Irish descent Writers from Toronto Plan 9 people Princeton University School of Engineering and Applied Science alumni Princeton University faculty Programming language designers Scientists at Bell Labs Canadian technology writers University of Toronto alumni Unix people C (programming language) Members of the United States National Academy of Engineering Berkman Fellows Scientists from Toronto
1848257
https://en.wikipedia.org/wiki/Charles%20Cecil
Charles Cecil
Charles Cecil (born 11 August 1962) is a British video game designer and co-founder of Revolution Software. His family lived in the Democratic Republic of the Congo when he was still very young, but was evacuated two years after Mobutu Sese Seko's coup d'état. He studied at Bedales School in Hampshire, England. In 1980 he began his studies in Engineering Manufacture and Management at Manchester University, where he met student Richard Turner, who invited him to write text adventures for Artic Computing. After completing his degree in 1985 he decided to continue his career in game development and became director of Artic. The following year he established Paragon Programming, a game development company working with British publisher U.S. Gold. In 1987 he moved into publishing as a software development manager for U.S. Gold. A year later he was approached by Activision and was offered the position of manager of their European development studio. In 1990, Cecil founded Revolution along with Tony Warriner, David Sykes and Noirin Carmody. Originally located in Hull, the company moved to York in 1994. Cecil then became Revolution's managing director and focused on writing and design. For the company's first title, Lure of the Temptress (1992), Cecil conceived with others an innovative game engine, called Virtual Theatre, that was designed by Tony Warriner. Cecil's interest in cinematic techniques and technical developments became manifest in Broken Sword: The Shadow of the Templars and the games that followed. Broken Sword 1 was a 2D point-and-click game, but by the end of the nineties Cecil took the company to 3D games with direct control, including Broken Sword: The Sleeping Dragon (2003). In 2004, with no project at hand, he, as head of the company, let everyone go. Nevertheless, he continued to design by implementing the so-called "Hollywood model", in which, as in film-making, a team is assembled anew for each production. 
For the fourth Broken Sword game, Broken Sword: The Angel of Death, he decided to work with Sumo Digital. By the end of the decade new developments made it possible to renew the back catalogue of Revolution, and in 2011 Develop ranked Revolution Software among the top 50 most successful development studios in the world. Lure of the Temptress was followed by a string of critically and commercially successful adventure games, including Beneath a Steel Sky, the Broken Sword series, In Cold Blood and Gold and Glory: The Road to El Dorado. Beneath a Steel Sky and the Broken Sword series are often cited among the best adventure games of all time, appearing on numerous "top" adventure game lists and receiving several awards and nominations. Sales of Broken Sword 1 and 2 have generated over US$100 million, and the games have sold over 3 million copies worldwide. New versions were downloaded by over 4 million people in 2011. Cecil worked on various adventure games outside Revolution, including The Da Vinci Code and Doctor Who: The Adventure Games. Cecil is currently operating as managing director of Revolution. He co-founded Game Republic in 2003 and has been a director on the board. He is a member of the advisory committee for the renewed Game Republic, and has been on the advisory panel of the Edinburgh Interactive Entertainment Festival. He is a member of the advisory panel of the Evolve and Develop Conference, a board member of Screen Yorkshire, and a member of Skillset's Computer Games Skills Council. He regularly talks at events and to mainstream press about creative and commercial aspects of the gaming industry. In 2006, he was awarded the status of "Development Legend" by Develop. He was appointed Member of the Order of the British Empire in the 2011 Birthday Honours for services to the video game industry. Biography Early career As a baby, Charles lived in the Democratic Republic of the Congo where his father David was sent by Unilever to reconstruct their accounting systems. 
When Cecil was two and his mother Veronica was about to give birth to his sister, they were evacuated after Mobutu Sese Seko's coup d'état. His taste for adventure may have started in those days, and the Congo would become a background in one of his games. Cecil was then educated at Bedales School in Hampshire, England. In 1980 he began his studies in mechanical engineering at Manchester University. On a course sponsored by Ford he met student Richard Turner, who invited him to write some text adventure games for his new computer game company, Artic Computing. He decided to take him up on the invitation, for, like all students, he needed beer money. In those days game development was the era of the auteur, of bedroom coders, and of direct contact with customers, a relationship that was lost when big game publishers took over. Cecil's first game was "Adventure B" (aka Inca Curse, published in 1981). It was followed by "Adventure C" (aka Ship of Doom, published in 1982) and "Adventure D" (aka Espionage Island, published in 1982). Each was highly successful on the Sinclair ZX81, ZX Spectrum and Amstrad formats. After completing his degree in 1985, Cecil decided to continue his career in game development and became director of Artic Computing. When Artic closed down, he established Paragon Programming (1986), a game development company working with major British publisher U.S. Gold. In 1987 he left development and moved into publishing as Software Development Manager for U.S. Gold. One year later he was approached by Activision and was offered the position of manager of their European development studio. Noirin Carmody, who would become his wife, was general manager at Activision, where she was responsible for establishing the Sierra name in Europe. Managing director of Revolution Software In 1989, when Cecil was still working at Activision, he decided to set up his own development studio. 
He contacted Tony Warriner, who had worked with him at Artic Computing and Paragon Programming, and Warriner brought in a fellow programmer, David Sykes. Together with Noirin Carmody, his then-partner and General Manager at Activision UK, they founded Revolution Software (March 1990). The company was originally located in Hull, but moved to York in 1994. Besides becoming Revolution's managing director, Cecil would focus from the start on writing and design. At that time the graphic adventure genre was dominated by LucasArts and Sierra On-Line, and they wanted to create something in between, an adventure game that didn't take itself too seriously, but did have a serious story. For Revolution's first title, Cecil conceived with others an innovative game engine, called Virtual Theatre, and the engine itself was designed by Tony Warriner. The result was Lure of the Temptress (1992), and though it was their first product, it became the first of a string of successful games. For the second title, Beneath a Steel Sky (1994), often referred to as a cult classic, Cecil contacted comic book artist Dave Gibbons. He had met Gibbons when he was still at Activision, and he admired Gibbons's work on Watchmen. Gibbons became involved in the design of the game, and their collaboration would inspire Cecil's next move. The divergence of and distinction between film and video games is one of Cecil's pet subjects, and his interest in cinematic techniques and technical developments would become manifest in Revolution's upcoming titles. He started to hire external talent from the TV and film trades for the big-budget production Broken Sword: The Shadow of the Templars (1996). The sequel, Broken Sword: The Smoking Mirror, was released the very next year. By the end of the nineties, when the adventure market changed, he had to change course as well. 
Unlike the previous games, which were point-and-click adventures, he chose to move to 3D and direct control with In Cold Blood (2000), a narrative-driven adventure game with action elements. At the same time a second title, Gold and Glory: The Road to El Dorado (2000), was developed after DreamWorks's film The Road to El Dorado. As Broken Sword was originally intended to be a trilogy, a third episode was planned. Unlike In Cold Blood, which combined 3D characters with pre-rendered graphics, the third Broken Sword game, Broken Sword: The Sleeping Dragon (2003), became a real-time 3D adventure game, with mild action elements (such as using stealth, climbing, shimmying, and pushing objects). Initially, when he announced that Broken Sword 3 was going to be a 3D game, it caused an outcry from the fans of the series. Cecil had had no choice but to adopt 3D, though, for when they needed funding in the beginning of 2000, publishers had become obsessed with the idea that everything was going to be 3D. But he had always been keen to move to 3D, as it allowed more special effects and would make the game world more alive. In the same year, he decided to release Beneath a Steel Sky (and Lure of the Temptress) as freeware, and the source code was given to ScummVM. The result was that millions of people played the game for free on a very wide range of devices. It would foreshadow Revolution's bright future. He could have said that as a marketing genius he planned it, but as he stated a few years later, that would have been a dreadful lie. However, some hard years were ahead for the company. Over the years it had grown to about 40 people, but the year after Broken Sword: The Sleeping Dragon one of Revolution's projects was cancelled, and he had no other option than to let everyone go. In May 2004 Cecil announced that Revolution would go "back to basics," which meant that Revolution, which had set itself up as both designer and producer of video games, would focus more closely on design. 
As he stated in various presentations, the situation was caused by the fact that big publishing companies had for years controlled both the supply and demand sides of the game market, and little was left for independent developers. Though publishers made tens of millions on the games, Revolution was losing money on every title they produced. In the new situation, he implemented the so-called Hollywood model, in which a producer and director come together and assemble a team to create a movie. For the fourth Broken Sword game, Broken Sword: The Angel of Death (2006), he decided to work with Sumo Digital. They took on a number of the former Revolution staff and concentrated on production, while Cecil concentrated on design, story and game play. Because Revolution had received a lot of feedback on the decision to abandon point-and-click, the player was allowed to choose between point-and-click and direct control. At the end of the decade things changed through innovations such as broadband, new platforms and digital portals. In the new situation game publishers and other middlemen were no longer needed. Revolution could now start to self-publish, and the relationship with the audience, a relationship that Cecil had always valued, could be restored. In March 2009 Broken Sword: Shadow of the Templars – The Director's Cut, which included new material, was published by Ubisoft for the Wii and DS. In July 2009 Revolution announced on their website a new division, called Revolution Pocket, together with the first title of the new division, Beneath a Steel Sky – Remastered. In the announcement Cecil stated that the digital revolution had changed the game for developers, and that more titles would follow. 
He had been contacted by Apple to see if he would consider bringing Revolution's classic titles to the App Store, and Cecil in turn had contacted Dave Gibbons to work on new editions of Beneath a Steel Sky, Broken Sword: The Shadow of the Templars – The Director's Cut and Broken Sword: The Smoking Mirror – Remastered (2010). The release of the first Broken Sword game was celebrated at the Apple Store in London in February 2010. According to Cecil the digital revolution, and in particular the App Store, saved Revolution. As was announced on Revolution's site in December 2011, the dramatic change enabled Revolution to self-fund their next game. Develop, which ranks development studios based on Metacritic data and chart success, ranked Revolution Software in 2011 among the top 50 most successful development studios in the world. On 23 August 2012 Revolution revealed that they were working on a new Broken Sword game entitled Broken Sword: The Serpent's Curse, and they launched a Kickstarter campaign. Though Cecil was approached by a major publisher to publish a Broken Sword game, Kickstarter was preferred because it allowed Revolution to control development, finances, and marketing. The project was successfully funded within two weeks. Other activities and events Cecil has worked on various games outside Revolution. He was a consultant for The Collective's The Da Vinci Code (2006). Disney approached him to design a game based on A Christmas Carol (Disney's A Christmas Carol, Disney Interactive Studios/Sumo Digital, 2009), and he became the voice of the narrator. A decade earlier he had already been executive producer of Disney's Story Studio: Disney's Mulan, a co-production between Kids Revolution and Disney Interactive (NewKidCo, 1999). He also became executive producer of the BBC/Sumo Digital's episodic adventure game Doctor Who: The Adventure Games (2010). The fifth episode (The Gunpowder Plot) won the British Academy Cymru Award 2012. 
Cecil regularly talks at events and to the press about creative and commercial aspects of the video games industry, and is an ambassador for the Yorkshire and UK games industry in general and Revolution in particular. He also teaches, gives masterclasses, acts as a judge on game proposals and mentors young game designers. Cecil was a founder of Yorkshire games network Game Republic in 2003, and has been a director on the board. He is a member of the advisory committee for the renewed Game Republic. He has been a member of the steering group and of the advisory board of the Edinburgh Interactive Entertainment Festival, and is on the advisory panel of the Evolve and Develop Conferences. He is also a board member of Screen Yorkshire, a member of Skillset's Computer Games Skills Council, and a member of the BFI Board of Governors. In 2006 Cecil was awarded the status of Development Legend by Develop, Europe's leading development magazine. In 2010 Ed Vaizey, Minister for Culture, Communications and Creative Industries, asked Cecil (together with Ian Livingstone) to be part of an independent review to assess which university courses best prepare graduates with the skills to succeed in the games industry. Cecil was appointed Member of the Order of the British Empire (MBE) in the 2011 Birthday Honours for services to the computer games industry. Revolution's game catalogue Revolution Software quickly established itself as Europe's leading adventure game developer with a string of titles, which have been critically and commercially successful. Clients included Sony Computer Entertainment, Disney, DreamWorks, Virgin Interactive, Sierra Entertainment (Vivendi), Ubisoft, and THQ. Their first two titles, Lure of the Temptress (published in 1992) and Beneath a Steel Sky (published in 1994), went straight to number one in the GALLUP charts in the UK and topped the charts across most of Europe. 
Revolution's next title, Broken Sword: Shadow of the Templars (published in 1996), and its sequel Broken Sword 2: The Smoking Mirror (published in 1997) each received numerous awards, including best adventure game of the year and best adventure game to date. Sales of Broken Sword 1 and 2 have generated over US$100 million, and the games have sold over three million copies worldwide. Revolution's next game In Cold Blood, published in 2000 by Sony Computer Entertainment, focused on telling stories through action-based gameplay and was met with mixed reviews, though it sold very well. Gold and Glory: The Road to El Dorado, based on the DreamWorks film The Road to El Dorado, was released in late 2000. In 2002 Broken Sword: Shadow of the Templars was also published on the Game Boy Advance, and in 2006 the game was also published for the Palm OS and Pocket PC. The third game in the Broken Sword series, Broken Sword: The Sleeping Dragon, was released in November 2003 for PC, PlayStation 2, and Xbox. The game sold on a par with the previous Broken Sword games and was nominated for three BAFTA awards and Best Writing at the Game Developers Conference in 2004. The fourth game, Broken Sword: The Angel of Death, was released on PC in September 2006. In 2009 Broken Sword: Shadow of the Templars – The Director's Cut was released for the Wii and DS, followed by Beneath a Steel Sky – Remastered for the iPhone. The "Director's Cut" was also released for iOS, Mac, PC (2010), and Android (2012). The game was nominated in the category Story at the British Academy Video Games Awards in 2010. Broken Sword: The Smoking Mirror – Remastered was released in 2010 (iOS, Mac, PC). In 2011 the two Broken Sword games were downloaded by over 4 million people. Personal life Charles Cecil and Noirin Carmody have two children, Ciara and David, who are credited in Broken Sword: The Sleeping Dragon and the new editions of Beneath a Steel Sky and Broken Sword. 
They all love games, and as his wife works with him as well, family life and work life are completely intertwined. Even their holidays are connected to game design, as they visit places that could feature in a game. Cecil loves history and physics-based science, but also enjoys physical activity, like rowing, competing in regattas, football, and tennis. Quotes on development Cecil believes that game design involves a different creative process as compared to traditional writing. As a writer of a linear story, "all they do is to write the script," he said. "In game design, the writer should think about the gameplay and background story first before developing any of the characters. However, the constraints of an interactive medium is no excuse for a poorly constructed story, the big thing is that we have a different medium. We have to accept that we have not only huge advantages in the interactive medium but also big constraints. And these constraints often lead to some really shitty stories. And that’s why so many games have bad stories." Cecil is also very serious about the research he does to develop games that have strong ties to historic locales and myths: "I take the historical research and research of our locations very seriously and will generally visit the locations to undertake recces. Of course this is almost always a pleasure – the games aim to feature locations that are exciting and interesting." In the same interview he stated that the name Broken Sword may have been chosen because it is a symbol of peace. He also added that it might have been a fate of history that this name was chosen: "I live in the city of York in England and a few years ago a statue of Constantine the Great was erected next to the cathedral to commemorate his coronation in the city in 306AD. The statue depicts Constantine sitting atop a broken sword; it seemed a fun coincidence, or perhaps it is down to fate." 
References External links Revolution Software (Mobile version website) Charles Cecil at MobyGames Game Nostalgia Charles Cecil biography 1962 births British video game designers Living people Members of the Order of the British Empire People educated at Bedales School People from York Video game directors Video game writers
175205
https://en.wikipedia.org/wiki/Integrated%20services
Integrated services
In computer networking, integrated services or IntServ is an architecture that specifies the elements to guarantee quality of service (QoS) on networks. IntServ can, for example, be used to allow video and sound to reach the receiver without interruption. IntServ specifies a fine-grained QoS system, which is often contrasted with DiffServ's coarse-grained control system. Under IntServ, every router in the system implements IntServ, and every application that requires some kind of QoS guarantee has to make an individual reservation. Flow specs describe what the reservation is for, while RSVP is the underlying mechanism to signal it across the network. Flow specs There are two parts to a flow spec: What does the traffic look like? Done in the Traffic SPECification part, also known as TSPEC. What guarantees does it need? Done in the service Request SPECification part, also known as RSPEC. TSPECs include token bucket algorithm parameters. The idea is that there is a token bucket which slowly fills up with tokens, arriving at a constant rate. Every packet which is sent requires a token, and if there are no tokens, then it cannot be sent. Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how 'bursty' the traffic is allowed to be. TSPECs typically just specify the token rate and the bucket depth. For example, a video with a refresh rate of 75 frames per second, with each frame taking 10 packets, might specify a token rate of 750 Hz, and a bucket depth of only 10. The bucket depth would be sufficient to accommodate the 'burst' associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate, but a much higher bucket depth. This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences. 
However, this means the bucket depth needs to be increased to compensate for the traffic being burstier. RSPECs specify what requirements there are for the flow: it can be normal internet 'best effort', in which case no reservation is needed. This setting is likely to be used for webpages, FTP, and similar applications. The 'Controlled Load' setting mirrors the performance of a lightly loaded network: there may be occasional glitches when two people access the same resource by chance, but generally both delay and drop rate are fairly constant at the desired rate. This setting is likely to be used by soft QoS applications. The 'Guaranteed' setting gives an absolutely bounded service, where the delay is promised to never go above a desired amount, and packets are never dropped, provided the traffic stays within spec. RSVP The Resource Reservation Protocol (RSVP) is described in RFC 2205. All machines on the network capable of sending QoS data send a PATH message every 30 seconds, which spreads out through the networks. Those who want to listen to them send a corresponding RESV (short for "Reserve") message which then traces the path backwards to the sender. The RESV message contains the flow specs. The routers between the sender and listener have to decide if they can support the reservation being requested, and, if they cannot, they send a reject message to let the listener know about it. Otherwise, once they accept the reservation they have to carry the traffic. The routers then store the nature of the flow, and also police it. This is all done in soft state, so if nothing is heard for a certain length of time, then the state will time out and the reservation will be cancelled. This solves the problem if either the sender or the receiver crashes or is shut down incorrectly without first cancelling the reservation. The individual routers may, at their option, police the traffic to check that it conforms to the flow specs. 
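The token bucket behaviour described in the TSPEC discussion above can be sketched in a few lines of Python. This is a simplified illustration rather than IntServ code; the rate and depth values mirror the hypothetical 75-frames-per-second video example (750 tokens per second, bucket depth 10).

```python
class TokenBucket:
    """Simplified token bucket: tokens accrue at a constant rate up to
    a maximum depth, and each packet sent consumes one token."""

    def __init__(self, rate_hz, depth):
        self.rate = rate_hz      # tokens added per second (average rate)
        self.depth = depth       # maximum burst size
        self.tokens = depth      # start with a full bucket
        self.last = 0.0          # time of the previous update, in seconds

    def try_send(self, now):
        """Return True if a packet may be sent at time `now`."""
        # Accrue tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# The video example: a 10-packet frame can be sent as one burst,
# but an 11th back-to-back packet must wait for new tokens.
bucket = TokenBucket(rate_hz=750, depth=10)
burst = [bucket.try_send(now=0.0) for _ in range(11)]
```

Here `burst` comes out as ten successes followed by one refusal, matching the intuition that the depth bounds the burst while the rate bounds the long-term average.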
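The soft-state idea (reservations silently expire unless refreshed) can likewise be illustrated with a small sketch. The class and the three-period timeout are assumptions chosen for illustration; the 30-second refresh interval matches the PATH message interval mentioned above, and real RSVP routers derive the state lifetime from parameters defined in RFC 2205.

```python
REFRESH_PERIOD = 30.0           # seconds between refresh messages
TIMEOUT = 3 * REFRESH_PERIOD    # assumed: state survives a few missed refreshes

class SoftStateTable:
    """Per-router table of reservations that decay unless refreshed."""

    def __init__(self):
        self.last_refresh = {}  # flow id -> time of the last refresh seen

    def refresh(self, flow_id, now):
        """Record a refresh message for a flow, (re)installing its state."""
        self.last_refresh[flow_id] = now

    def expire(self, now):
        """Drop any reservation not refreshed within the timeout."""
        self.last_refresh = {f: t for f, t in self.last_refresh.items()
                             if now - t <= TIMEOUT}

    def active(self):
        return set(self.last_refresh)

table = SoftStateTable()
table.refresh("video-flow", now=0.0)
table.refresh("voice-flow", now=0.0)
table.refresh("voice-flow", now=60.0)   # only this flow keeps refreshing
table.expire(now=120.0)                 # the stale reservation is cancelled
```

The point of the sketch is that a crashed sender or receiver needs no explicit teardown: its state simply stops being refreshed and is garbage-collected.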
Problems In order for IntServ to work, all routers along the traffic path must support it. Furthermore, a large amount of state must be stored in each router. As a result, IntServ works on a small scale, but as the system scales up to larger networks or the Internet, it becomes resource-intensive to keep track of all of the reservations. One way to solve the scalability problem is by using a multi-level approach, where per-microflow resource reservation (such as resource reservation for individual users) is done in the edge network, while in the core network resources are reserved for aggregate flows only. The routers that lie between these different levels must adjust the amount of aggregate bandwidth reserved from the core network so that the reservation requests for individual flows from the edge network can be better satisfied. References "Deploying IP and MPLS QoS for Multiservice Networks: Theory and Practice" by John Evans, Clarence Filsfils (Morgan Kaufmann, 2007, ) External links - Integrated Services in the Internet Architecture: an Overview - Specification of the Controlled-Load Network Element Service - Specification of Guaranteed Quality of Service - General Characterization Parameters for Integrated Service Network Elements - Resource ReSerVation Protocol (RSVP) Cisco.com, Cisco Whitepaper about IntServ and DiffServ Internet Standards Internet architecture Quality of service
25512740
https://en.wikipedia.org/wiki/NetLabs
NetLabs
NetLabs was a software company that was founded in 1989 to address management of SNMP and CMOT (CMIP over TCP/IP) devices. CMOT was specified in RFC 1095. This RFC was subsequently obsoleted by RFC 1189. RFC 1147 mentions the company and some of its products in a catalog of network management tools. The company was acquired by Seagate Technology in 1995 as part of its Seagate Software division. History The company was founded in Los Angeles, California by Unni Warrier, Anne Lam, Jon Biggar, and Dan Ketcham. Larry Wall, the inventor of the Perl programming language, joined the company. In 1991, the company relocated to Los Altos, California. A number of employees moved from Los Angeles to the San Francisco Bay Area to continue with the company. Around this time, Unni Warrier and Anne Lam left the company, and Andre Schwager (as CEO) and Roselie Buonauro (head of Marketing) joined. After being acquired by Seagate (announced March 20, 1995), the company moved to Cupertino, California. Subsequently, Seagate Software sold off the Network and Storage Management Group to Veritas Software. Veritas in turn sold off some of the software to OpenService, Inc. OpenService changed its name to LogMatrix. Products NetLabs products included: NerveCenter - network management console with correlation engine AssetManager - networked computing asset database with auto-discovery Vision - WYSIWYG network element panel simulator The network management marketplace during the years before it was acquired included HP (OpenView), Sun (SunNet Manager, and Solstice Enterprise Manager), Cabletron (Spectrum) and others. NetLabs licensed software to Sun. It also released a version of software that would allow it to coexist with and augment OpenView instead of directly competing. Ultimately, one of the products, NerveCenter, is being offered by LogMatrix. References External links Network Management and Woodchucks Current NerveCenter Information Network management
10150899
https://en.wikipedia.org/wiki/Nico%20Habermann
Nico Habermann
Arie Nicolaas Habermann (26 June 1932 – 8 August 1993), often known as Nico Habermann, was a noted Dutch computer scientist. Habermann was born in Groningen, Netherlands, and earned his B.S. in mathematics and physics and M.S. in mathematics from the Free University of Amsterdam in 1953 and 1958. After working as a mathematics teacher, in 1967 he received his Ph.D. in applied mathematics from the Eindhoven University of Technology under advisor Edsger Dijkstra. In 1968, Habermann was invited to join the department of computer science at Carnegie Mellon University as a visiting research scientist. In 1969 he was appointed an associate professor, and was made full professor in 1974, acting department head in 1979, and department head from 1980 to 1988, after which he was named Dean of the new School of Computer Science (established under Allen Newell and Herbert A. Simon). He also cofounded Carnegie Mellon's Software Engineering Institute (SEI) in 1985. Habermann's research included programming languages, operating systems, and development of large software systems. He was known for his work on inter-process communication, process synchronization and deadlock avoidance, and software verification, but particularly for the programming languages ALGOL 60, BLISS, Pascal, and Ada. He also contributed to new operating systems such as Edsger Dijkstra's THE multiprogramming system, the Family of Operating Systems (FAMOS) at Carnegie Mellon, Berlin's Dynamically Adaptable System (DAS), and Unix. Habermann served as visiting professor at the University of Newcastle upon Tyne (1973) and the Technical University of Berlin (1976), and as adjunct professor at Shanghai Jiao Tong University (1986–1993). In 1994, the Computing Research Association began giving the A. Nico Habermann Award to people for work that increases the involvement of underrepresented communities in computer research. 
References External links Carnegie Mellon University archives Nico's student tree, 1990 1932 births 1993 deaths Scientists from Groningen (city) American computer scientists Dutch computer scientists Carnegie Mellon University faculty Software engineering researchers Dutch emigrants to the United States
51497693
https://en.wikipedia.org/wiki/QxBranch
QxBranch
QxBranch, Inc. (QxBranch) is a data analysis and quantum computing software company, based in Washington, D.C. The company provides data analytics services and research and development for quantum computing technology. On July 11, 2019, QxBranch announced that it had been acquired by Rigetti Computing, a developer of quantum integrated circuits used for quantum computers. Services QxBranch provides predictive analytics services to firms in the banking and finance industries. The company also develops software products for quantum computing technologies, including developer tools and interfaces for quantum computers, as well as quantum computing simulators. Additionally, the company provides consulting and research and development for businesses that may be improved through quantum computing methods, including in the development of adiabatic quantum computing methods for machine learning applications. History QxBranch was founded in 2014 as a joint spin-off of Shoal Group and The Tauri Group to commercialize quantum computing technology. Shoal Group (named Aerospace Concepts at the time) had a research agreement with Lockheed Martin to access a D-Wave Two quantum computer, and transitioned the access and associated technology to help found QxBranch. In August 2014, QxBranch was selected as one of eight participants for Accenture's FinTech Innovation Lab program in Hong Kong. In May 2015, Dr. Ray O Johnson, former Chief Technology Officer of Lockheed Martin Corporation, joined QxBranch as executive director. In January 2016, Australian Prime Minister Malcolm Turnbull toured QxBranch's facilities in Washington, D.C. for a demonstration of quantum computing applications. In November 2016, QxBranch, in partnership with UBS, was announced as a winning bid under the Innovate UK's Quantum Technologies Innovation Fund under the UK National Quantum Technologies Programme. 
The partnership is working on developing quantum algorithms for foreign exchange market trading and arbitrage. In April 2017, QxBranch, in partnership with the Commonwealth Bank of Australia, released a quantum computing simulator aiming to enable software and algorithm development to assess the feasibility and performance of applications ahead of the development of silicon-based quantum computers. The simulator was modeled on the hardware being developed by the University of New South Wales and made accessible as part of the bank’s internal cloud-based systems, allowing developers to design and evaluate software and algorithms concurrently with the hardware's ongoing development. In February 2018, QxBranch demonstrated a quantum deep learning network that simulated the 2016 US Presidential Election, resulting in slightly improved forecasts of the election outcome over those of forecasting site FiveThirtyEight. In April 2018, IBM announced a collaboration with QxBranch and other companies for research access to its IBM Quantum Experience quantum computers. Locations QxBranch is headquartered in Washington, D.C. and has an engineering team in Adelaide, Australia, as well as offices in London and Hong Kong. See also List of Companies involved in Quantum Computing or Communication Timeline of quantum computing Adiabatic quantum computation References External links Official website "Decipher quantum computing". CNBC News "Quantum Computing: A Discussion with Michael Brett". Center for Strategic and International Studies "Hearing -- Disrupter Series: Quantum Computing". United States House Committee on Energy and Commerce Software companies based in Washington, D.C. Software companies established in 2014 Quantum information science Software companies of the United States
12850185
https://en.wikipedia.org/wiki/Mumble%20%28software%29
Mumble (software)
Mumble is a voice over IP (VoIP) application primarily designed for use by gamers and is similar to programs such as TeamSpeak. Mumble uses a client–server architecture which allows users to talk to each other via the same server. It has a very simple administrative interface and features high sound quality and low latency. All communication is encrypted. Mumble is free and open-source software, is cross-platform, and is released under the terms of the BSD-3-Clause license. Channel hierarchy A Mumble server (called Murmur) has a root channel and a hierarchical tree of channels beneath it. Users can temporarily connect channels to create larger virtual channels. This is useful during larger events where a small group of users may be chatting in a channel, but are linked to a common channel with other users to hear announcements. This structure also maps well to team-based first-person shooter (FPS) games. Each channel has an associated set of groups and access control lists which control user permissions. The system supports many usage scenarios, at the cost of added configuration complexity. Sound quality Mumble uses the low-latency audio codec Opus as of version 1.2.4, the codec that succeeds the previous defaults Speex and CELT. This and the rest of Mumble's design allow for low-latency communication, meaning a shorter delay between when something is said on one end and when it's heard on the other. Mumble also incorporates echo cancellation to reduce echo when using speakers or poor-quality sound hardware. Security and privacy Mumble connects to a server via a TLS control channel, with the audio travelling via UDP encrypted with AES in OCB mode. As of 1.2.9, Mumble prefers ECDHE + AES-GCM cipher suites if possible, providing perfect forward secrecy. While password authentication for users is supported, since 1.2.0 it is typically eschewed in favor of strong authentication in the form of public key certificates. 
Overlay There is an integrated overlay for use in fullscreen applications. The overlay shows who is talking and what linked channel they are in. As of version 1.0, users could upload avatars to represent themselves in the overlay, creating a more personalized experience. As of version 1.2, the overlay works with most Direct3D 9/10 and OpenGL applications on Windows and has OpenGL support for Linux and Mac OS X. Support for DirectX 11 applications was later added. Positional audio For certain games, Mumble modifies the audio to position other players' voices according to their relative position in the game. This gives not only a sense of direction but also of distance. To realise this, Mumble sends each player's in-game position to players in the same game with every audio packet. Mumble can gather the information needed to do this in two ways: it either reads the needed information directly out of the memory of the game, or the games provide it themselves via the so-called link plugin interface. The link plugin provides games with a way to expose the information needed for positional audio themselves by including a small piece of source code provided by the Mumble project. Several high-profile games have implemented this functionality, including many of Valve's Source Engine based games (Team Fortress 2, Day of Defeat: Source, Counter-Strike: Source, Half-Life 2: Deathmatch) and Guild Wars 2. Mobile apps Third-party mobile apps are available for Mumble, such as Mumble for iOS, Plumble for Android (F-Droid, Google Play; discontinued in 2016), and Mumla (F-Droid, Google Play). Server integration Mumble fits into existing technological and social structures. As such, the server is fully remote controllable over ZeroC Ice and gRPC. User channels as well as virtual server instances can be manipulated. 
The project provides a number of sample scripts illustrating the abilities of the interface as well as prefabricated scripts offering features like authenticating users using an existing phpBB or Simple Machines Forum database. The murmur server uses port 64738 TCP and UDP by default. The port number refers to the address of the reset function on a Commodore 64. An alternative minimalist implementation of the mumble-server (Murmur) is called uMurmur. It is intended for installation on embedded devices with limited resources, such as residential gateways running OpenWrt. Server hosting Like many other VoIP clients, Mumble servers can be either rented or hosted locally. Hosting a Mumble server locally requires downloading Murmur (included as an option in the Mumble installer) and launching it. Configuring the server is achieved by editing the configuration file, which holds information for the server's name, user authentication, audio quality restrictions, and port. Administrating the server from within requires a user to be given administrator rights, or can also be done by logging into the SuperUser account. Administrators within the server can add or edit rooms, manage users, and view the server's information. See also Comparison of VoIP software References External links Free software programmed in C++ Free VoIP software Internet software for Linux MacOS Internet software Software using the BSD license Voice over IP clients that use Qt VoIP software Windows Internet software
52316181
https://en.wikipedia.org/wiki/Firefox%20Focus
Firefox Focus
Firefox Focus is a free and open-source privacy-focused mobile browser from Mozilla, available for Android and iOS smartphones and tablets. Firefox Focus was initially a tracker-blocking application for mobile iOS devices, released in December 2015. It was developed into a minimalistic web browser shortly afterwards. However, it can still work solely as a tracking-blocker in the background of the Safari browser on Apple devices. In June 2017, the first release for Android went public and was downloaded over one million times in the first month. As of January 2017, it is available in 27 languages. Since July 2018, Firefox Focus has been preinstalled on the BlackBerry Key2 as part of the application Locker. To bypass content-blocker restrictions from Apple, Firefox Focus uses the UIWebView API on iOS devices. On Android, it used the Blink engine in version 6.x and earlier, and it has used GeckoView since version 7.0. Tracking protection Firefox Focus is designed to block online trackers, including third-party advertising, with the end goal of both improving browsing speed and protecting users' privacy. Content blocking is achieved using the Disconnect block lists. The blocking of third-party trackers (except "other content trackers") is enabled by default. In the other Firefox browsers, users have to enable the Tracking Protection feature inside the browser preferences manually. Users can also view types of trackers on a page by tapping on the shield icon next to the URL bar. A panel pops up showing which kinds of trackers are on that page: ad trackers, analytics trackers, social trackers or content trackers. On December 20, 2018, Mozilla announced that Firefox Focus now checks all URLs against the Google Safe Browsing service to help prevent people from accessing fraudulent sites. Functions Firefox Focus can be set as a content blocker in the Safari web browser options. 
After activating the Safari integration in the Firefox Focus settings, Firefox Focus disables trackers automatically in the background when browsing using the Safari browser. Pressing the trash icon while browsing deletes all session data and returns to the start screen, which displays the customisable search bar. Tabs can be opened by long-pressing a URL on a website. Favourite links can be set on the homescreen of the device. Firefox Focus contains an option called telemetry. By activating it, users allow Mozilla to collect and receive non-personally identifiable information to improve Firefox. Due to privacy concerns, telemetry of Firefox Klar is disabled by default. On October 15, 2018, Mozilla announced that Firefox Focus is being updated with a new search feature and visual design. The redesign is intended to make the browser's features and options clearer to users. Minimum device requirements Apple mobile devices There are some minimum hardware requirements to remove tracking content. The mechanism needs hardware that can handle the extra load of content blocking, so it only works on 64-bit devices running iOS 9 and above including: iPhone 5s and newer iPad Air and newer iPad mini 2 and newer iPod touch from the 6th generation. iOS 11.4 or above is required to download Firefox Focus on the App Store. Android mobile devices Android version 5.0 or higher is required to use the latest version of Firefox Focus. Firefox Klar "Firefox Klar" is the modified version, with telemetry disabled, released for German-speaking countries in order to avoid ambiguity with the German news magazine FOCUS. F-Droid uses this flavor due to telemetry being disabled by default in Klar. 
Gallery See also Firefox, the desktop web browser Firefox for Android, a project for Android smartphones and tablet computers Firefox for iOS, a project for iOS smartphones and tablets Safari, the default web browser for iOS Mobile browser References External links Firefox Focus for iOS on the App Store Firefox Focus for Android on Google Play Firefox Klar for Android on F-Droid Firefox Android (operating system) software BlackBerry software Free and open-source Android software Free multilingual software Free software programmed in Objective-C Free software programmed in Swift Free web browsers IOS software IOS web browsers Mobile web browsers Software using the Mozilla license 2016 software
208753
https://en.wikipedia.org/wiki/List%20of%20computers%20running%20CP/M
List of computers running CP/M
Many microcomputer makes and models could run some version or derivation of the CP/M disk operating system. Eight-bit computers running CP/M 80 were built around an Intel 8080/8085, Zilog Z80, or compatible CPU. CP/M 86 ran on the Intel 8086 and 8088. Some computers were suitable for CP/M as delivered. Others needed hardware modifications such as a memory expansion or modification, new boot ROMs, or the addition of a floppy disk drive. A few very popular home computers using processors not supported by CP/M had plug-in Z80 or compatible processors, allowing them to use CP/M and retaining the base machine's keyboard, peripherals, and sometimes video display and memory. The following is an alphabetical list of some computers running CP/M. A Ai Electronics ABC-24 / ABC-26 (Japan, running Dosket, CP/M & M/PM) Action Computer Enterprises ACE-1000 Action Computer Enterprises Discovery D-500 (CP/M-80 on each of up to 4 user processors, DPC/OS on service processor) Action Computer Enterprises Discovery D-1600 (CP/M-80 on each of up to 15 user processors, DPC/OS on service processor) Actrix Computer Corp. Actrix (Access Matrix) Advanced Digital Corporation Super Six Allen Bradley Advisor - Industrial Programmable controller graphical user interface (development mode only), fl. ca. 
1985 Alspa MITS Altair 8800 Altos 580 Amada Aries 222/245 CNC turret punch press Amstrad CPC 464 (w/DDI-1 disk drive interface), 664, 6128, 6128Plus Amstrad PCW 8256/8512/9512/9256/10 Amust Executive 816 Apple II (with a Z-80 card like the Microsoft SoftCard; on some clones a SoftCard equivalent was built into the mainboard) Apple III (with a Z-80 card like the Apple SoftCard III) Applied Technology MicroBee (56KB+ RAM models) Aster CT-80 Atari 800 and XL/XE (with ATR8000 module, LDW Super 2000, CA-2001 or Indus GT disk drives expanded to 64k) Atari ST - runs GEMDOS, which was DRI's more advanced replacement for CP/M for use with their GEM GUI ATM-turbo - Soviet/Russian clone of ZX-Spectrum with extension graphic and 512/1024Kb RAM: CP/M 2.2 in ROM AT&T 6300 with CPU 3 upgrade AT&T 6300 PLUS B Basis 108 BBC Micro (with external Z80 module) Beehive Topper II Bigboard BMC if-800 Bondwell II,12, 14 BT Merlin M2215 series based on ICL PC-2 (CP/M) (also ran MP/M II+) BT Merlin M4000 series based on Logica Kennett (Concurrent CP/M-86) C Camputers Lynx (96k/128k models) Casio FP1000 FL CASU Super-C - Z80 based with a 21 slot S100 bus (Networkable with MP/M) - UK manufactured CASU Mini-C - Z80 based with a 7 slot S100 bus and twin 8" floppy disk drives (Networkable with MP/M) - UK manufactured Challenger III - Ohio Scientific OSI-CP/M Cifer Systems 2684, 2887, 1887 - Melksham, England. CIP04 - Romanian computer CoBra - Romanian computer Coleco Adam (with a CP/M digital data pack) Comart Communicator (CP/M-80), C-Frame, K-Frame, Workstation and Quad (Concurrent CP/M-86) Commodore 64 (with Z80 plug-in cartridge) Commodore 128 (using its internal Z80 processor--along with its 8502--ran CP/M+ which supported memory paging) Compaq Portable - was available with CP/M as a factory installed option. 
Compis Compupro Cromemco C't180 HD64180 ECB-System (CP/M2.2 & 3.x) Cub-Z - Romanian made computer D Datamax UV-1R Data Soft PCS 80 and VDP 80 (France, 1977) Data Technology Industries "Associate" (USA, 1982) DEC Rainbow 100/100+ (could run both CP/M and CP/M-86) DEC VT180 (aka Personal Computing Option, aka 'Robin') Digital Group DG1 E Eagle Computer Eagle I, II, III, IV, V ELWRO 800 Junior Polish clone of Sinclair ZX spectrum—running CP/J, a CP/M derivative with simple networking abilities ENER 1000 Enterprise 128 (with EXDOS/IS-DOS extensions) Epic Episode Epson PX-4, PX-8 (Geneva), QX-10, QX-16 Eracom ERA-50 & ERA-60 with encrypted disks (Eracom Corporation, Australia) Exidy Sorcerer F Ferguson Big Board FK-1 - Czech microcomputer Franklin ACE 1000 (with Microsoft Z-80 SoftCard) Franklin ACE 1200 (includes a PCPI Applicard clone) Fujitsu Micro 7 (with Z-80 plug-in card) G General Processor GPS5 (Italy, running CP/M 86 - Concurrent CP/M 86) General Processor Model T (Italy, 1980 running CP/M 80) Grundy NewBrain Genie II, IIs, III, IIIs Goupil G3 G.Z.E. 
UNIMOR Bosman 8 (Poland, 1987 running CPM/R, CP/M 2.2 compatible) Gemini 801 and Gemini Galaxy (UK, 1981-1983 running CP/M 2.2 and MP/M) H HBN Computer (Le) Guépard HC-88 HC-2000 Heath/Zenith Heathkit H90|H90 and Heathkit H89/Zenith Z-89 Hewlett-Packard HP-85 / HP-87 (with addition of CP/M Module containing Z80) Hewlett-Packard HP-125 and HP-120, one Z80 each for CP/M and the inherent HP terminal Hobbit Holborn 6100 Holborn 9100 (Netherlands, 1981) Husky Computers Ltd Hunter (1 and 2, 16), Hawk I Ibex 7150 and other models ICL PC-1 (CP/M) (also ran MP/M) ICL PC-2 (CP/M) (also ran MP/M II+) ICL PC-16 (Concurrent CP/M-86) ICL PC Quattro (Concurrent CP/M-86) ICL DRS8801 (CP/M-86) ICL DRS300 (Concurrent CP/M-86) ICL DRS20 (CP/M or Concurrent CP/M-86) IBM Displaywriter IBM PC (CP/M-86 only; CP/M-80 with the Baby Blue Z-80 card) IMSAI 8080 IMSAI VDP-80 (8085 3 MHz) Intel MDS-80 Intertec Superbrain Iotec Iskra Delta Partner Itautec I-7000, I-7000G, I-7000 Jr. (SIM/M) ITT 3030 Ivel Ultra J JET-80 (Swedish Made Computer) Juku (microcomputer)|Juku E5101–E5104 came with an adaptation of CP/M called EKDOS JUNIOR Romanian Computer K Kaypro KC 85/2-4 Kontron PSI98 (KOS & CP/M2.2) Korvet (Корвет) — Soviet PC L Labtam LNW-80 LOBO Max-80 Logica VTS 2200 (CP/M-86) Logica VTS Kennet (Concurrent CP/M-86) LOS 25 (10 MB harddisc) Luxor ABC 802, ABC 806 (Sweden, 1981) M MCP (128K, Z80, S-100 bus) MC CP/M Computer (Z80 ECB-System, CP/M2.2) Megatel Quark Memotech MTX MicroBee Micro Craft Dimension 68000 (CP/M-68K, and CP/M-80 with optional Z80 card) Micromation M/System, Mariner and MiSystem (MP/M and MP/M II) Micromint SB180 (Hitachi HD64180 CPU) Mikromeri Spectra Z (Finland) Morrow Designs (MD2, MD3, MD11) MSX (some MSX-standard machines ran the CP/M-like MSX-DOS) Mycron 3 M 18 Romanian Computer M 118 Romanian Computer MK 45 Polish computer based on MCY7880 N N8VEM N8VEM ZetaSBC Nabu Network PC Nascom 1, 2 NCR Decision Mate V NEC APC NEC PC-8001 Mk II NEC PC-8801 Nelma Persona NorthStar 
Advantage (all in one computer) NorthStar Horizon (S-100) Nokia MikroMikko 1 NYLAC Computers NYLAC (S-100) O OKI IF-800 (Z80 5 MHz) Second Z80 on video controller Olivetti ETV300 Olivetti M20 (CP/M-8000) Osborne 1 Osborne Executive Osborne Vixen Otrona Attaché Otrona Attaché 8:16 P P112 Philips P2000T Philips 3003/3004 Piccolo RC-700|Piccolo Piccoline RC-759 Pied Piper Polymorphic Systems 8813 The Portable Computer Co (AU) PortaPak Profi - Soviet/Russian clone of ZX-Spectrum with extension grafic and 1024Kb RAM: CP/M plus in ROM Processor Technology Sol-20 (optional) Pulsars Little Big Board Q Quasar Data Products QDP-300 R RAIR "Black Box" (also ran MP/M) Regnecentralen Piccolo RC-700 Regnecentralen Piccoline RC-759 Research Machines 380Z and LINK 480Z Rex Computer Company REX 1 Robotron A 5120 Robotron KC 85, KC 87 Robotron PC 1715 Royal Business Machines 7000 "Friday" S SAGE II / IV CP/M-68K SAM Coupé - (Pro-Dos = CP/M 2.2) Samsung SPC-1000 Sanyo MBC families (i.e. MBC-1150) SBS 8000 Scandis Seequa Chameleon Sharp MZ series Sharp X1 series Sirius 1 (sold in the U.S. as the Victor 9000) Software Publisher's ATR8000 Sony SMC-70 Sord M5 has CP/M as an option, CP/M-68K standard for the M68/M68MX Spectravideo SV-318/328 Sperry Univac UTS 40 CP/M 2.2 - Zilog 80 Stride 400 series CP/M-68K was one of many operating systems on these ZX Spectrum family (built by Amstrad) T Tatung Einstein TC-01 (runs Xtal/DOS which is CP/M compatible) Tandy TRS-80 Technical Design Labs (TDL) XITAN TeleData (Z80 Laptop) Telenova Compis (CP/M-86) Teleputer III Televideo TS-80x Series Televideo TS-160x Series Texas Instruments TI-99/4A (with the MorningStar CP/M card or the Foundation CP/M card) Tiki-100 (runs KP/M, or later renamed TIKO. A CP/M 2.2 Clone.) TIM-011 TIM-S Plus Timex FDD3000 (on Z80 CPU) with ZX Spectrum as terminal. 
Toshiba T100 Toshiba T200 Toshiba T200 C-5 Toshiba T200 C-20 Toshiba T250 Transtec BC2 Triumph-Adler AlphaTronic P1/P2 Triumph-Adler AlphaTronic P3/P4 Triumph-Adler AlphaTronic P30/P40 Triumph-Adler AlphaTronic PC (CPU was a Hitachi Z80 clone) Tycom Microframe U Unitron 8000, a dual processor machine built São Paulo in the early 1980s. The Unitron could boot either as an Apple II clone (using a clone 6502 processor) or in CP/M (using the Z80). V Vector-06C (Intel 8080, 16 color graphics, made in USSR) Vector Graphic Vector Graphic Corporation Vector Model 1,2 (Internal Model),3, Model 4 (Z80 & 8088 CP/M, CP/M-86 & PCDOS), Model 10 (Multiuser) Victor 9000 (sold as the Sirius 1 in Europe) Video Technology Laser 500/700 Visual Technology (Lowell, Ma) Visual 1050, 1100 (Not Released) W Wave Mate Bullet Welect 80.2 (France, 1982) West PC-800 X Xerox 820 Xerox Sunrise 1800 / 1805 Y Yodobashi Formula-1 Z Zenith Data Systems Z-89 (aka Heathkit H89) Zenith Data Systems Z-100 (CP/M-85) Zorba References External links Intel iPDS-100 Using CP/M-Video CP M
635213
https://en.wikipedia.org/wiki/Dartmouth%20Time%20Sharing%20System
Dartmouth Time Sharing System
The Dartmouth Time-Sharing System (DTSS) is a discontinued operating system first developed at Dartmouth College between 1963 and 1964. It was the first successful large-scale time-sharing system to be implemented, and was also the system for which the BASIC language was developed. DTSS was developed continually over the next decade, reimplemented on several generations of computers, and finally shut down in 1999. Early history Professors John Kemeny and Thomas Kurtz at Dartmouth College purchased a Royal McBee LGP-30 computer around 1959, which was programmed by undergraduates in assembly language. Kurtz and four students programmed the Dartmouth ALGOL 30 compiler, an implementation of the ALGOL 58 programming language, which two of the students, Stephen Garland and Anthony Knapp, then evolved into the SCALP (Self Contained ALgol Processor) language between 1962 and 1964. Kemeny and freshman Sidney Marshall collaborated to create DOPE (Dartmouth Oversimplified Programming Experiment), which was used in large freshman courses. Kurtz approached Kemeny in either 1961 or 1962 with the following proposal: all Dartmouth students would have access to computing, it should be free and open-access, and this could be accomplished by creating a time-sharing system (which Kurtz had learned about from colleague John McCarthy at MIT, who suggested "why don't you guys do timesharing?"). Although it has been stated that DTSS was inspired by a PDP-1-based time-sharing system at Bolt, Beranek and Newman, there is no evidence that this is true. In 1962, Kemeny and Kurtz submitted a proposal for the development of a new time-sharing system to the NSF (which was ultimately funded in 1964). Sufficiently assured that both Dartmouth and the NSF would support the system, they signed a contract with GE and began preliminary work in 1963, before the proposal was funded.
In particular, they evaluated candidate computers from Bendix, GE, and IBM, and settled upon the GE-225 system paired with a DATANET-30 communications processor. This two-processor approach was unorthodox, and Kemeny later recalled: "At that time, many experts at GE and elsewhere, tried to convince us that the route of the two-computer solution was wasteful and inefficient." In essence, the DATANET-30 provided the user interface and scheduler, while user programs ran in the GE-225. Implementation began in 1963, carried out by a student team under the direction of Kemeny and Kurtz with the aim of providing easy access to computing facilities for all members of the college. The GE-225 and DATANET-30 computers arrived in February 1964. Two students, John McGeachie and Michael Busch, wrote the operating systems for the DATANET-30 and GE-225; Kemeny contributed the BASIC compiler. The system became operational in mid-March, and on May 1, 1964, at 4:00 a.m., it began regular operations. In the autumn of 1964, hundreds of freshman students began to use the system via 20 teletypes, with access at Hanover High School via one additional teletype; later that autumn the GE-225 computer was replaced with a faster GE-235 computer with minimal issues. By the summer of 1965, the system could support forty simultaneous users. A Dartmouth document from October 1964, later revised by GE, describes the overall DTSS architecture: "The program in the Datanet-30 is divided into two parts, a real-time part and a spare-time part. The real-time part is entered via clock controlled interrupt 110 times per second in order to scan the teletype lines. As characters are completed, the real-time part collects them into messages and, when a "return" character is encountered, interprets the message. If it is a line in the program, nothing is done. If the message is a command, a spare-time task to start carrying out the command is set up and inserted in the spare-time list.
If there is not enough time to complete this setting-up, the real-time part will complete the set-up during the next real-time period. The spare-time portion carries out the spare-time tasks, which include mainly disc operations and certain teletype operations. In the GE-235 part there is resident compiler system that acts as a translator, and a resident executive routine to manage the disc input-output operations and to perform other functions. The executive system permits simultaneous use of the card equipment, the tape drives, and the high-speed printer during time-sharing through interrupt processing. Two algebraic languages, BASIC and ALGOL, are available, with FORTRAN planned for September 1965. These one-pass compilers are rather fast, requiring usually 1 to 4 seconds per compilation." User interface design Kemeny and Kurtz observed that "any response time which averages more than 10 seconds destroys the illusion of having one's own computer", so DTSS's design emphasized immediate feedback. Many of its users thus believed that their terminal was the computer and that, Kemeny wrote, "the machine is there just to serve him and that he has complete control of the entire system". Because of its educational aims, ease of use was a priority in the design of DTSS. It implemented the world's first Integrated Design Environment (IDE). Any line typed in by the user, and beginning with a line number, was added to the program, replacing any previously stored line with the same number; anything else was taken as a command and immediately executed. Lines consisting solely of a line number were not stored, but removed any previously stored line with the same number. This method of editing provided a simple, easy-to-use service that allowed large numbers of teleprinters to serve as the terminal units for the Dartmouth Time-Sharing System.
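A minimal sketch of this editing model in modern Python may make it concrete (a toy illustration only, not actual DTSS code; the names make_session, handle and listing are invented for the example):

```python
# Toy sketch of the DTSS editing model: a typed line beginning with a line
# number is stored in the program, replacing any stored line with the same
# number; a bare line number deletes that line; anything else is treated as
# a command and executed immediately.

def make_session():
    program = {}          # line number -> statement text

    def handle(line):
        """Process one typed line the way the DTSS terminal loop did."""
        head, _, rest = line.strip().partition(" ")
        if head.isdigit():
            num = int(head)
            if rest:
                program[num] = rest       # add or replace the numbered line
            else:
                program.pop(num, None)    # bare line number: delete the line
            return None
        return f"executing command: {line.strip()}"   # e.g. LIST, RUN, SAVE

    def listing():
        return [f"{n} {s}" for n, s in sorted(program.items())]

    return handle, listing

handle, listing = make_session()
handle('10 PRINT "HELLO"')
handle('20 GOTO 10')
handle('10 PRINT "HI"')     # replaces line 10
handle('20')                # deletes line 20
print(listing())            # ['10 PRINT "HI"']
print(handle("RUN"))        # executing command: RUN
```

The same two-way split (numbered line versus command) is what let DTSS serve both as an editor and as a command shell over a plain teleprinter.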
IDE commands included:
CATALOG – to list previously named programs in storage
LIST – to display the current program in memory
NEW – to name and begin writing a program in memory
OLD – to copy a previously named program from storage to memory
RENAME – to change the name of the program in memory
RUN – to compile and execute the current program in memory
SAVE – to copy the current program from memory to storage
SCRATCH – to clear the content of the current program from memory
UNSAVE – to remove the current program from storage
Users often believed that these commands were part of the BASIC language, but in fact they were part of the time-sharing system and were also used when preparing ALGOL or FORTRAN programs via DTSS terminals. GE-Dartmouth relationship Kemeny and Kurtz had originally hoped that GE would enter into a research partnership, and to that end Kurtz and student Anthony Knapp authored a document about their proposed system design, which they presented to GE's Phoenix office in 1962. However, GE rejected the partnership, and its October 1962 proposal to Dartmouth was framed solely as a commercial sale. That said, GE and Dartmouth promoted the operational Dartmouth Time Sharing System in October 1964 at the Fall Joint Computer Conference in San Francisco, with three teletypes connected to the Dartmouth system in Hanover. From December 1964 into January 1965, two Dartmouth students installed working copies of DTSS and BASIC on GE computers in Phoenix. In early 1965, GE began to advertise timesharing services on its GE-265 system (GE 235 + DATANET 30), including BASIC and Dartmouth Algol, later renaming it the GE Mark I time-sharing system. Over the next few years, GE opened 25 computer centers in the United States and elsewhere, serving over fifty thousand users.
The Computer History Museum's Corporate Histories Collection describes GE's Mark I history this way: The precursor of General Electric Information Services began as a business unit within General Electric formed to sell excess computer time on the computers used to give customer demos. In 1965, Warner Sinback recommended that they begin to sell time-sharing services using the time-sharing system (Mark 1) developed at Dartmouth on a General Electric 265 computer. The service was an instant success and by 1968, GEIS had 40% of the $70 million time-sharing market. The service continued to grow, and over time migrated to the GE-developed Mark II and Mark III operating systems running on large mainframe computers. Dartmouth Time Sharing System, version 2 From 1966 to 1968, DTSS was reimplemented on the GE 635, still using the DATANET-30 for terminal control. The GE 635 system was delivered in November 1966. By October 1967, it was providing a service based on Phase I software, jointly developed by Dartmouth and GE, which GE subsequently marketed as the GE Mark II system. In parallel with this work, Dartmouth embarked in 1967 on the development of Phase II under the direction of Professor John Kemeny, with programming carried out by students and faculty. Phase II of the Dartmouth Time-Sharing System replaced Phase I on April 1, 1969, at Dartmouth. As described in 1969, the new DTSS architecture was influenced by three criteria:
The experiences with the 265 system.
The published concepts of the Multics system.
A realization of the limitations of the capabilities of a part-time staff of Dartmouth students and faculty members.
This new version was completely different internally from the earlier DTSS, but provided a near-identical user interface to enable a smooth transition for users and course materials.
The 635 version provided interactive time-sharing to nearly 300 simultaneous users in the 1970s, a very large number at the time, and operated at eleven commercial and academic sites in the U.S.A., Canada and Europe. As it evolved in the 1970s, later versions moved to Honeywell 6000 series mainframes (1973) and Honeywell 716 communication processors (1974). In 1976 the GE-635 system was replaced by a Honeywell 66/40A computer. It remained in operation until the end of 1999. DTSS, version 2, included a novel form of inter-process communication called "communication files". These significantly antedated Unix pipes, as design documents put their conceptual origin sometime in 1967, and they were described briefly at a 1969 conference: A communications file allows two jobs to interact directly without the use of secondary storage. A communications file has one end in each of two jobs. It is the software analog of a channel-to-channel adaptor. This structure allows job-to-job interactions using the same procedures as for more conventional files. The two ends are labeled master end and slave end. A job at the slave end of a communications file cannot easily distinguish this file from a conventional file. Since a job at the master end of a communications file can control and monitor all data transmitted on that file, a master end job can simulate a data file, thereby providing a useful debugging aid and also providing a convenient mechanism for interfacing running jobs to unexpected data structures. Communication files supported read, write and close operations, but also synchronous and asynchronous data transfer, random access, status inquiries, out-of-band signaling, error reporting, and access control, with the precise semantics of each operation determined by the master process. As Douglas McIlroy notes: "In this, [communication files were] more akin to Plan 9's 9P protocol than to familiar IO."
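A loose modern analogy, offered here as an assumption rather than a description of the DTSS implementation (which predates sockets), is a connected in-memory socket pair: the job on the "slave" end performs ordinary reads and writes, while the "master" end can script the data that the slave sees, as when simulating a data file:

```python
# Loose modern analogy for DTSS "communication files" (not DTSS code): two
# ends of an in-memory duplex channel. The "slave" end reads as if from a
# conventional file, while the "master" end simulates that file and can also
# monitor what the slave writes back.
import socket

master, slave = socket.socketpair()

# Master end: pretend to be a stored data file containing two records.
master.sendall(b"record 1\nrecord 2\n")
master.shutdown(socket.SHUT_WR)          # "end of file" from the slave's view

# Slave end: reads the data like an ordinary file, unaware of the master.
data = slave.makefile("rb").read()
print(data.decode())

# Master end can monitor data the slave writes back on the same channel.
slave.sendall(b"status: ok\n")
print(master.recv(64).decode().strip())
```

The analogy is imperfect: communication files also carried out-of-band signals and master-defined semantics per operation, closer (as McIlroy notes) to Plan 9's 9P than to a plain byte stream.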
A notable application of communication files was in support of multi-user conferences, which behaved somewhat like conference phone calls and were implemented entirely as user-space application programs. The Kiewit Network As mentioned above, Hanover High School was connected to DTSS from the system's beginning. Over the next decade, many other high schools and colleges were connected to DTSS via the Kiewit Network, named for Peter Kiewit, donor of funds for the Kiewit Computation Center that housed the DTSS computers and staff. These schools connected to DTSS via one or more teletypes, modems, and dial-up telephone lines. During this time, Dartmouth ran active programs to engage and train high school teachers in using computation within their courses. By 1967, the following high schools had joined the Kiewit Network: Hanover High School, The Holderness School, Mascoma Valley Regional High School, Kimball Union Academy, Mount Hermon School, Phillips Andover Academy, Phillips Exeter Academy, St. Paul's School, and Vermont Academy. This group expanded in the Dartmouth Secondary School Project, funded by the NSF during 1967-1968, which added the following New England high schools: Cape Elizabeth High School, Concord High School, Hartford High School (Vermont), Keene High School, Lebanon High School, Loomis School, Manchester Central High School, Rutland High School, St. Johnsbury Academy, South Portland High School, and Timberlane High School. From 1968 to 1970, Dartmouth added a number of colleges to the Kiewit Network via its Regional College Consortium. They included: Bates College, Berkshire Community College, Bowdoin College, Colby Junior College, Middlebury College, Mount Holyoke College, New England College, Norwich University, the University of Vermont, and Vermont Technical College. By 1971, the Kiewit Network connected 30 high schools and 20 colleges in New England, New York, and New Jersey.
At that time, DTSS was supporting over 30,000 users, of which only 3,000 were at Dartmouth College. By 1973, the Kiewit Network had expanded to include schools in Illinois, Michigan, upstate New York, Ohio, and Montreal, Canada. Usage 57% of DTSS use was for courses and 16% for research. Kemeny and Kurtz intended for students in technical and nontechnical fields to use DTSS. They arranged for the second trimester of the freshman mathematics class to include a requirement for writing and debugging four Dartmouth BASIC programs. By 1968, more than 80% of Dartmouth students had experience in computer programming. 80 classes included "official" computer use, including those in engineering, classics, geography, sociology, and Spanish. 27% of DTSS use was for casual use and entertainment, which the university stated "is in no sense regarded as frivolous", as such activities encouraged users to become familiar with and not fear the computer. The library of about 500 programs as of 1968 included, Kemeny and Kurtz reported, "many games". They were pleased to find that 40% of all faculty members—not just those in technical fields—used DTSS, and that many students continued using the system after no longer being required to. Kemeny—by then the university president—wrote in a 1971 brochure describing the system that just as a student could enter Baker Memorial Library and borrow a book without asking permission or explaining his purpose, "any student may walk into Kiewit Computation Center, sit down at a console, and use the time-sharing system. No one will ask if he is solving a serious research problem, doing his homework the easy way, playing a game of football, or writing a letter to his girlfriend". By the 1967–68 school year, in addition to 2,600 Dartmouth users, 5,550 people at ten universities and 23 high schools accessed DTSS. By the early 1970s the campus had more than 150 terminals in 25 buildings, including portable units for patients at the campus infirmary. 
About 2,000 users logged into DTSS each day; 80% of students and 70% of faculty used the system each year. The off-campus Dartmouth Educational Time-Sharing Network included users with 79 terminals at 30 high schools and 20 universities, including Middlebury College, Phillips Andover, Mount Holyoke College, Goddard College, the United States Merchant Marine Academy, the United States Naval Academy, Bates College, the Dartmouth Club of New York, and a Dartmouth affiliate in Jersey City, New Jersey, sharing DTSS with Dartmouth people. Because BASIC did not change, the system remained compatible with older applications; Kemeny reported in 1974 that programs he had written in 1964 would still run. The system allowed email-type messages to be passed between users and real-time chat via a precursor to the Unix talk program. By 1980, supported languages and systems included:
7MAP – DTSS 716 Macro Assembly Program
8MAP – DTSS PDP-8 Macro Assembly Program
9MAP – DTSS PDP-9 Macro Assembly Program
ALGOL – DTSS ALGOL 60
ALGOL68 – DTSS ALGOL 68
APL – DTSS APL
BASIC – BASIC
CHESS – Chess-playing program
COBOL – DTSS COBOL
COURSE – IBM-compatible COURSEWRITER III author program
CPS – "Complete Programming System" developed at Bates College
CROSREF – Program cross-references
DDT – Honeywell 600/6000 machine language debugging program
DMAP – DTSS DATANET-30 Macro Assembly Program
DTRAC – DTSS Text Reckoning and Compiling Language
DXPL – DTSS XPL Translator Writing System
DYNAMO – DYNAMO simulation language
FORTRAN – DTSS FORTRAN
GMAP – Honeywell 600/6000 Macro Assembly Program
LISP – DTSS LISP
MIX – DTSS MIX Assembler
PILOT – DTSS PILOT course writer
PL/I – DTSS PL/I
PLOT – Graphics system for use with BASIC or SBASIC
SBASIC – Structured BASIC
SIX – FORTRAN 76
SNOBOL – DTSS SNOBOL4
DTSS today In 2000, a project to recreate the DTSS system on a simulator was undertaken; as a result, DTSS is now available for Microsoft Windows systems and for the Apple Macintosh computer.
See also
Timeline of operating systems
Time-sharing system evolution
References
463432
https://en.wikipedia.org/wiki/Native%20POSIX%20Thread%20Library
Native POSIX Thread Library
The Native POSIX Thread Library (NPTL) is an implementation of the POSIX Threads specification for the Linux operating system. History Before the 2.6 version of the Linux kernel, processes were the schedulable entities, and there were no special facilities for threads. However, the kernel did have a system call, clone, which creates a copy of the calling process in which the copy shares the address space of the caller. The LinuxThreads project used this system call to provide kernel-level threads (most of the previous thread implementations in Linux worked entirely in userland). Unfortunately, LinuxThreads only partially complied with POSIX, particularly in the areas of signal handling, scheduling, and inter-process synchronization primitives. To improve upon LinuxThreads, it was clear that some kernel support and a new threading library would be required. Two competing projects were started to address the requirement: NGPT (Next Generation POSIX Threads), worked on by a team which included developers from IBM, and NPTL, by developers at Red Hat. The NGPT team collaborated closely with the NPTL team, and the best features of both implementations were combined into NPTL; the NGPT project was subsequently abandoned in mid-2003. NPTL was first released in Red Hat Linux 9. Old-style Linux POSIX threading was known for having trouble with threads that refuse to yield to the system occasionally, because it did not take the opportunity to preempt them when it arose, something that Windows was known to do better at the time. Red Hat claimed that NPTL fixed this problem in an article on the Java website about Java on Red Hat Linux 9. NPTL has been part of Red Hat Enterprise Linux since version 3, and in the Linux kernel since version 2.6. It is now a fully integrated part of the GNU C Library. There exists a tracing tool for NPTL, called the POSIX Thread Trace Tool (PTT).
An Open POSIX Test Suite (OPTS) was also written for testing NPTL against the POSIX standard. Design NPTL uses a similar approach to LinuxThreads, in that the primary abstraction known by the kernel is still a process, and new threads are created with the clone() system call (called from the NPTL library). However, NPTL requires specialized kernel support to implement (for example) the contended case of synchronization primitives, which might require threads to sleep and be woken again. The primitive used for this is known as a futex. NPTL is a so-called 1×1 threads library, in that threads created by the user (via the pthread_create() library function) are in 1-to-1 correspondence with schedulable entities in the kernel (tasks, in the Linux case). This is the simplest possible threading implementation. An alternative to NPTL's 1×1 model is the m×n model. See also LinuxThreads Library (computer science) Green threads References External links NPTL Trace Tool, an open-source tool to trace and debug multithreaded applications using NPTL.
1338556
https://en.wikipedia.org/wiki/Gateway%20%28telecommunications%29
Gateway (telecommunications)
A gateway is a piece of networking hardware or software used in telecommunications networks that allows data to flow from one discrete network to another. Gateways are distinct from routers or switches in that they communicate using more than one protocol to connect multiple networks and can operate at any of the seven layers of the Open Systems Interconnection (OSI) model. The term gateway can also loosely refer to a computer or computer program configured to perform the tasks of a gateway, such as a default gateway or router; in the case of HTTP, gateway is also often used as a synonym for reverse proxy. Network gateway A network gateway provides interoperability between networks and contains devices such as protocol translators, impedance matchers, rate converters, fault isolators, or signal translators. A network gateway requires the establishment of mutually acceptable administrative procedures between the networks using the gateway. Network gateways, known as protocol translation gateways or mapping gateways, can perform protocol conversions to connect networks with different network protocol technologies. For example, a network gateway connects an office or home intranet to the Internet. If an office or home computer user wants to load a web page, at least two network gateways are accessed—one to get from the office or home network to the Internet and one to get from the Internet to the computer that serves the web page. In enterprise networks, a network gateway usually also acts as a proxy server and a firewall. On Microsoft Windows, the Internet Connection Sharing feature allows a computer to act as a gateway by offering a connection between the Internet and an internal network. IP gateway On an Internet Protocol (IP) network, IP packets with a destination outside a given subnet are sent to the network gateway.
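This forwarding rule can be sketched with Python's ipaddress module; the private network 192.168.1.0/24 and the gateway address 192.168.1.1 are illustrative values, and next_hop is an invented helper name:

```python
# Sketch of the host forwarding decision: a destination inside the local
# subnet is delivered directly; anything else is handed to the configured
# default gateway.
import ipaddress

LOCAL_SUBNET = ipaddress.ip_network("192.168.1.0/24")   # mask 255.255.255.0
DEFAULT_GATEWAY = ipaddress.ip_address("192.168.1.1")

def next_hop(destination: str):
    dest = ipaddress.ip_address(destination)
    if dest in LOCAL_SUBNET:
        return dest                 # same subnet: send directly
    return DEFAULT_GATEWAY          # outside the subnet: send via the gateway

print(next_hop("192.168.1.42"))     # 192.168.1.42 (delivered directly)
print(next_hop("93.184.216.34"))    # 192.168.1.1 (sent to the gateway)
```

Real hosts keep a routing table with multiple prefixes rather than a single subnet test, but the longest-match decision for a simple one-subnet host reduces to exactly this check.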
For example, if a private network has a base IPv4 address of 192.168.1.1 and a subnet mask of 255.255.255.0, then any data addressed to an IP address outside the 192.168.1.0/24 network is sent to the network gateway. IPv6 networks work in a similar way. While forwarding an IP packet to another network, the gateway may perform network address translation. Internet-to-orbit gateway An Internet-to-orbit gateway (I2O) connects computers or devices on the Internet to computer systems orbiting Earth, such as satellites or manned spacecraft. Project HERMES, run by the Ecuadorian Civilian Space Agency, was the first to implement this kind of gateway, on June 6, 2009. Project HERMES has a maximum coverage of 22,000 km and can transmit voice and data. The Global Educational Network for Satellite Operations (GENSO) is another type of I2O gateway. Cloud storage gateway A cloud storage gateway is a network appliance or server which translates cloud storage APIs such as SOAP or REST to block-based storage protocols such as iSCSI or Fibre Channel, or to file-based interfaces such as NFS or CIFS. Cloud storage gateways enable companies to integrate private cloud storage into applications without moving the applications into a public cloud, thereby simplifying data protection. IoT gateway An Internet of things (IoT) gateway provides the bridge (protocol converter) between IoT devices in the field, the cloud, and user equipment such as smartphones. The IoT gateway provides a communication link between the field and the cloud, and may provide offline services and real-time control of devices in the field. To achieve sustainable interoperability in the Internet of things ecosystem, two dominant architectures for data exchange protocols are used: bus-based (DDS, REST, XMPP) and broker-based (AMQP, CoAP, MQTT, JMI). Protocols that support information exchange between interoperable domains are classified as message-centric (AMQP, MQTT, JMS, REST) or data-centric (DDS, CoAP, XMPP).
Interconnected devices communicate using lightweight protocols that do not require extensive CPU resources. C, Java, Python, and some scripting languages are the preferred choices of IoT application developers. IoT nodes use separate IoT gateways to handle protocol conversion, database storage, or decision making (e.g. collision handling) in order to supplement the low intelligence of devices. See also References Sources Federal Standard 1037C MIL-STD-188
35773358
https://en.wikipedia.org/wiki/User%20profile
User profile
A user profile is a collection of settings and information associated with a user. It contains critical information that is used to identify an individual, such as their name, age, portrait photograph and individual characteristics such as knowledge or expertise. User profiles are most commonly present on social media websites such as Facebook, Instagram, and LinkedIn, where they serve as a voluntary digital identity of an individual, highlighting their key features and traits. In personal computing and operating systems, user profiles serve to categorise files, settings, and documents by individual user environments, known as accounts, allowing the operating system to be friendlier and catered to the user. Physical user profiles serve as identity documents, such as passports, driving licenses and legal documents, that are used to identify an individual under the legal system. A user profile can also be considered the computer representation of a user model. A user model is a (data) structure that is used to capture certain characteristics about an individual user, and the process of obtaining the user profile is called user modeling or profiling. Origin The origin of user profiles can be traced to that of the passport, an identity document (ID) made mandatory in 1920, after World War I. The passport served as an official government record of an individual. Subsequently, the Immigration Act of 1924 was established to identify an individual's country of origin. In the 21st century, the passport has become a highly sought-after commodity, as it is widely accepted as a means of verifying an individual's identity under the legal system. With the advent of the digital revolution and social media websites, user profiles have transitioned to an organised group of data describing the interaction between a user and a system. Social media sites like Instagram allow individuals to create profiles that are representative of their desired personality and image.
Filling in all fields of profile information may not be necessary to create a meaningful self-presentation; this grants individuals more control over the identity they wish to present, by displaying their most meaningful attributes. A personal user profile is a key aspect of an individual's social networking experience, around which their public identity is built. Types of user profiles A user profile can be of any format if it contains information, settings and/or characteristics specific to an individual. The most widespread user profiles include those on photo- and video-sharing websites such as Facebook and Instagram, accounts on operating systems such as Windows and macOS, and physical documents such as passports and driving licenses. Social media Effectively structured user profiles on social media channels such as Instagram and Facebook offer a way for people to form impressions about someone that are predictive of, or similar to, meeting them offline. The condensed format of social media profiles allows for quick filtering of millions of profiles by matching individuals on the characteristics and interests provided upon sign-up. Research highlights that only a "thin slice" of information is required to form an impression about an individual online (Stecher and Counts 2008). Online user profiles eliminate much of the complexity present in face-to-face meetings, such as behavioural, facial, and environmental information, resulting in increased predictiveness of user personality. Dating apps and websites rely solely on an individual's user profile and the information provided to form interactions and communication with others on the platform. Despite users having control over the presented information, lying is minimal in online dating contexts (Hancock, Toma and Ellison, 2007).
Apps such as Bumble allow users to "match" with other individuals based on their characteristics, with selected filters that let users narrow the search to their preferences. Information for a user's profile is voluntarily specified by the user and includes details such as height, interests, photographs, gender or education. The information required varies with each platform, and there is little consensus on the appropriate amount of information for a condensed user profile. Universally, social networking platforms display an individual's profile picture and an "about me" section that allows for self-expression. Influencers Influencer user profiles are third-party endorsers who shape audience attitudes and decisions through social media content such as photos, blogs and tweets. Social media influencers (SMIs) often hold a significant following on a social media platform, which enables them to be recognised as opinion leaders who exert informational influence on their audience. The influencer marketing industry gained prominence in 2018, when the photo-sharing app Instagram crossed 1 billion users, with approximately 60,000 Google search queries for "influencer marketing" that same year. Influencer user profiles present a unique selling point, or public personality, tailored to the needs and wants of their target audience. SMI profiles advertise product information and the latest promotions, and regularly engage with their followers to maintain their online persona. Messages endorsed by social media influencers are often perceived as reliable and compelling; one study found that 82% of followers were more inclined to follow the suggestions of their favorite influencer. This allows advertisers to leverage online user profiles and their audience rapport to target younger and niche audiences.
According to a market survey, influencer marketing through social media profiles yields a return 11 times higher than traditional marketing, as influencers are more capable of communicating with a niche segment. The most popular influencers include sports stars such as Cristiano Ronaldo and Hollywood personalities such as Dwayne Johnson and Kylie Jenner, each with over 200 million followers. Ecommerce Online shopping or ecommerce websites such as Amazon use information from a customer's user profile and interests to generate a list of recommended items to shop. Recommendation algorithms analyse user demographic data, history, and favourite artists to compile suggestions. The store rapidly adapts to changing user needs and preferences, with real-time results required within half a second. New profiles naturally have limited information for algorithms to analyse, and each customer interaction provides valuable information which is stored in a database linked to the individual profile. User profiles on ecommerce websites also serve to improve sellers' sales, as individuals are recommended products that other "customers who bought this item also bought", widening the buyer's selection. A study found that user profiles and recommendation algorithms have a significant impact on related product sales and the overall spending of an individual. A process known as "collaborative filtering" tries to identify products of common interest to an individual on the basis of views expressed by other, similarly behaving profiles. Features such as product ratings, seller ratings and comments allow individual user profiles to contribute to recommendation algorithms, eliminate adverse selection and help shape an online marketplace adhering to Amazon's zero-tolerance policy for misleading products. Digital User Profiles Modern software and applications treat user profiles as a foundation on which a usable application is built.
The structure and layout of an application, such as its menus, features and controls, are often derived from a user’s selected settings and preferences. Digital user profiles in computer systems originated with Windows NT, which held user settings and information in a separate environment variable named %USERPROFILE%, pointing to the root of the user’s profile. Operating systems such as macOS further raised the prominence of user profiles from Mac OS X 10.0 onward. Iterations have since been made with each operating system release with the aim of maximising user friendliness. Features such as keyboard layouts, time zones, measurement units, synchronisation of different services and privacy preferences are made available during the setup of a user account on the computer. Types of Accounts Administrator Administrator user profiles have complete access to the system and its permissions. The administrator is often the first user profile on a system by design, and is what allows other accounts to be created. However, since the administrator account has no restrictions, it is highly vulnerable to malware and viruses, with the potential to impact all other accounts. Guest Guest accounts allow other people to access a system with limited functionality and restrictions on modifying apps, settings and documents. Guest user accounts address the concern of providing full access to one's account to other individuals. On macOS, guest profiles do not require a password but are fully subject to the parental controls of an administrator account. Features such as automatic deletion of data and history once a session is closed allow guest accounts to save disk space when a user logs off. Guest accounts are most commonly used in public services such as libraries, where individuals can request a temporary account to complete work and research.
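The %USERPROFILE% convention can be demonstrated with a short portable sketch. The helper name below is our own; on Windows it reads the environment variable introduced with Windows NT, while on other systems the user's home directory stands in for the profile root.

```python
import os
from pathlib import Path

def profile_root():
    """Return the current user's profile root directory.

    On Windows NT and its descendants this is %USERPROFILE%;
    elsewhere Path.home() gives the closest equivalent.
    """
    return os.environ.get("USERPROFILE") or str(Path.home())

print(profile_root())
```

Applications typically store per-user settings beneath this directory, which is what lets each account keep its own layouts, preferences and history.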
Physical User Profiles Physical user profiles or legal documents such as passports and driving licences are widely accepted as official government records of an individual’s details. Much like digital user profiles, these documents outline primary characteristics of an individual such as their full legal name, birthdate, address, portrait picture and a date of expiry. In recent history, many user profiles have included a date of expiry or date of creation to indicate the legitimacy of the document and/or to encourage renewal to maintain accuracy of details. In some countries, it is a requirement to hold a passport that remains valid for six months after the planned departure from the country. National Identity Documents National identity documents are any documents issued by the official national authority, and form part of a government record. They are used to verify aspects of an individual's personal identity. Government-issued documents include birth certificates, driver's licences, marriage certificates, national identity documents and social security cards. The format of identity documents varies from country to country. Controversies Cambridge Analytica Scandal 2018 The Cambridge Analytica scandal, which surfaced in 2018, raised global concerns over privacy and the psychographic profiling algorithms that can be derived from user profiles. In 2013, Aleksandr Kogan of Cambridge Analytica developed an application, "thisisyourdigitallife", which operated as a personality quiz, with the key caveat of connecting to an individual's Facebook user profile in order to operate. Many news sources documented Cambridge Analytica's exploitation of the Facebook data algorithm, whereby users not only gave the app permission to access their "likes", but also information about their contacts and friends. The amassed data, covering approximately 87 million Facebook users, was harvested and exploited legally, to predict and influence individual voting decisions in the 2016 presidential election.
For many users it was unsettling that social media was being used to influence public opinion, leading to #deletefacebook campaigns on Twitter as a backlash to the scandal and Facebook's inability to guard against privacy invasions. However, research conducted on undergraduate students revealed that many users believe an exchange of personal information is necessary to participate in a social network, and thus, despite the "breach of trust" (Zuckerberg, 2018), few users left the platform permanently. In the months following founder Mark Zuckerberg's congressional hearing regarding the scandal, 74% of users made adjustments to their use of Facebook user profiles and changed their privacy settings. The Federal Trade Commission (FTC) legally required Facebook to acquire explicit consent from users for the use of their data, alongside disclosing appropriate information about the identity of third parties. #DeleteFacebook Movement Social media dissatisfaction can arise from challenges relating to misinformation, privacy and anti-social behaviour. 'Facebooklessness', a term coined by Ongun & Güder (2013), describes intentional distancing and isolation from Facebook. The #deletefacebook movement arose after the 2018 Cambridge Analytica scandal, which fuelled a lack of trust in the service and its ability to protect user information. Reported reasons for intentional distancing included wasted time, reducing distraction, privacy concerns, seeking new relationships and coping with lost relationships. The movement away from Facebook was less a one-time gush than a steady trickle over time. Some users adapted by deactivating their profiles (which can be reactivated later), while others permanently and irretrievably deleted their accounts. For many users, deactivating was a reactionary and temporary response to the scandal, as social needs and constant connectedness with relationships introduced imperatives to stay.
References See also Internet privacy Identity documents Online identity Online identity management Personally identifiable information Web mining Social Media Identity management Knowledge representation Identity documents Information technology Games Software features
23907933
https://en.wikipedia.org/wiki/Ceres%20%28workstation%29
Ceres (workstation)
The Ceres Workstation was a workstation computer built by Niklaus Wirth's group at ETH Zurich in 1987. The central processing unit (CPU) is a National Semiconductor NS32000, and the operating system, named the Oberon System, is written fully in the object-oriented programming language Oberon. It is an early example of an object-oriented operating system using garbage collection at the system level and a document-centered approach for the user interface (UI), as envisaged later with OpenDoc. Ceres was a follow-up project to the Lilith workstation, which was based on AMD bit-slicing technology and the programming language Modula-2. On the same hardware, Clemens Szyperski implemented, as part of his Doctor of Philosophy (PhD) thesis, an operating system named ETHOS, which takes full advantage of object-oriented technologies. A Usenet posting by Szyperski says Oberon/F, renamed BlackBox Component Builder, incorporates many ETHOS ideas and principles. Links ETH Computer Science History Ceres-1 and Ceres-3 at the Computer History Museum, Mountain View, California, USA (see also its publications, especially pages 6 & 7 of Core 3.1) Hardware Description of the Workstation Ceres ETH Technical Report 70 Design of the Processor-Board for the Ceres-2 Workstation Hardware Description of the Workstation Ceres-3 ETH Technical Report 168 References Computer workstations
18167742
https://en.wikipedia.org/wiki/Twitterrific
Twitterrific
Twitterrific is a macOS and iOS client for the social networking site Twitter created by The Iconfactory; it was the first Twitter desktop client for macOS. It lets users view "tweets" or micro-blog posts on the Twitter website in real time as well as publish their own. Twitterrific is closed-source software. Features The program's main window uses a translucent black theme similar to certain palettes used in Aperture, iPhoto and other Apple Inc. software. Users may choose to view the full public timeline or just the friends feed. Users can also click on links to view the poster's profile or mark a tweet as a favorite. Twitterrific also provides functionality to upload images and videos for posting on Twitter. History As of version 2.1, Twitterrific supports Growl notifications and enhanced AppleScript capabilities, and can be used with other sites or services that use the Twitter API. Version 3 changed Twitterrific into advertising-supported shareware; every hour an ad is refreshed to the top of the list. Users who buy the program receive no ads. Other changes in version 3 mostly added compatibility with Mac OS X 10.5 and incorporated newer Twitter features like direct messaging. The iOS version of Twitterrific won the 2008 Apple Design Award for Best iPhone Social Networking Application. On April 1, 2010, The Iconfactory released Twitterrific for iPad (version 1.0), ready for the iPad's US launch on April 3. On June 24, a version of Twitterrific was launched (version 3.0) that was universally compatible with the iPhone, iPod touch and the iPad. On February 14, 2017, a Kickstarter project was launched by The Iconfactory to try to revive the Twitterrific for Mac application. On October 10, 2017, the Mac application received a 5.0 update and was added to the Mac App Store. On June 13, 2019, the iOS version 6.0 was announced. It disregarded previous in-app purchases: users who had previously paid not to see ads were shown ads again.
Iconfactory regards Twitterrific 6.0 as a new app but does not give existing users the option of staying on Twitterrific 5.x. References External links Iconfactory : Software : Twitterrific MacOS Internet software The Iconfactory Twitter services and applications Shareware 2007 software IOS software Microblogging software
4991526
https://en.wikipedia.org/wiki/Lucan%20Irish
Lucan Irish
The Lucan Irish are a junior ice hockey team based in Lucan Biddulph, Ontario. They play in the Provincial Junior Hockey League of the Ontario Hockey Association. History The Irish were founded in 1968 as the Lucan Irish Six, named after the Black Donnellys. In 1982, the Irish won the Western Ontario Junior D Hockey League's championship. They went on to defeat the Langton Thunderbirds of the Southern Counties Junior D Hockey League 4-games-to-1 to win the OHA Cup as provincial champions. In 1987, the Irish again won the Western Junior D's playoff title. They defeated the Tavistock Braves of the Southern Counties Junior D league 4-games-to-3 to win the OHA Cup for the second time as provincial champions. In 1988, all remaining Junior D leagues were consolidated into the Western Junior league. In 1991, the league dropped the Junior D label and became the OHA Junior Development League. Starting in 1988, the Irish set out to prove they were the "cream of the crop" in this new league. After the 1988-89 season, the Irish made it all the way to the league finals, but were thwarted by the Lambeth Lancers. In 1990, 1991, and 1992, the same scenario repeated itself: the Irish would finish highly ranked in their league and make it all the way to the finals, and three years in a row they met the Thamesford Trojans, who beat them each time. In 1999, after finishing second overall in the league standings, the Irish fought all the way back to the league final. The Irish defeated the Wellesley Applejacks 4-games-to-3 to win their third OHA Cup. In 2006, the Irish finished twelfth overall in the OHAJDL standings. As a low seed in the standings, the Irish were expected to lose out in the early rounds of the league playoffs. Instead, they entered the conference quarter-final against the Exeter Hawks and walked right through them with a 4-game sweep. The conference semi-final had the same result against the North Middlesex Stars.
The conference final was against their rivals, the Thamesford Trojans, whom they defeated in five games to enter the league finals for the first time in seven seasons. In the final, they met the Delhi Travellers. In a tight series, the Irish were not to be denied, as they defeated the Travellers 4-games-to-2 to win their fourth OHA Cup. After the 2006-07 season, the OHAJDL was disbanded and the Southern Ontario Junior Hockey League was formed. In the 2006-07 season, they finished seventh overall in the league. In the league's conference quarter-finals, the Irish met their match in the Thamesford Trojans, who defeated them 4-games-to-1. Season-by-season standings Playoffs 1982 Won league, Won OHA championship Lucan Irish defeated Langton Thunderbirds 4-games-to-1 in OHA championship 1987 Won league, Won OHA championship Lucan Irish defeated Tavistock Braves 4-games-to-3 in OHA championship 1989 Lost final Lambeth Lancers defeated Lucan Irish 4-games-to-1 in final 1990 Lost final Thamesford Trojans defeated Lucan Irish 4-games-to-2 in final 1991 Lost final Thamesford Trojans defeated Lucan Irish 4-games-to-1 in final 1992 Lost final Thamesford Trojans defeated Lucan Irish 4-games-to-0 in final 1999 Won League Lucan Irish defeated Wellesley Applejacks 4-games-to-3 in final 2006 Won League Lucan Irish defeated Exeter Hawks 4-games-to-none in conf. quarter-final Lucan Irish defeated North Middlesex Stars 4-games-to-none in conf. semi-final Lucan Irish defeated Thamesford Trojans 4-games-to-1 in conf. final Lucan Irish defeated Delhi Travellers 4-games-to-2 in final 2007 Lost Conference quarter-final Thamesford Trojans defeated Lucan Irish 4-games-to-1 in conf. quarter-final 2008 Lost Conference quarter-final Thamesford Trojans defeated Lucan Irish 4-games-to-3 in conf. quarter-final Notable alumni Matt Read External links Irish Homepage Southern Ontario Junior Hockey League teams Ice hockey teams in Ontario
4977555
https://en.wikipedia.org/wiki/AdvFS
AdvFS
AdvFS, also known as Tru64 UNIX Advanced File System, is a file system developed in the late 1980s to mid-1990s by Digital Equipment Corporation for their OSF/1 version of the Unix operating system (later Digital UNIX/Tru64 UNIX). In June 2008, it was released as free software under the GPL-2.0-only license. AdvFS has been used in high-availability systems where fast recovery from downtime is essential. Functionality AdvFS uses a relatively advanced concept of a storage pool (called a file domain) and of logical file systems (called file sets). A file domain is composed of any number of block devices, which may be partitions, LVM or LSM devices. A file set is a logical file system created in a single file domain. Administrators can add or remove volumes from an active file domain, provided, in the case of removal, that there is enough space on the remaining volumes of the file domain. This was one of the trickier original features to implement, because all data and metadata residing on the disk being removed had to first be migrated, online, to other disks prior to removal. File sets can be balanced, meaning that their file content is distributed evenly across physical volumes. Particular files in a file set can be striped across available volumes. Administrators can take a snapshot (or clone) of any active or inactive file set. This allows for easy on-line backups. Another feature allows administrators to add or remove block devices from a file domain while the file domain has active users. This add/remove feature allows migration to larger devices or migration from potentially failing hardware without a system shutdown.
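The relationship between file domains, file sets and volumes, including the migrate-before-remove behaviour described above, can be sketched as a toy data model. The class and method names below are illustrative and do not mirror AdvFS's real on-disk structures or administration commands.

```python
class FileDomain:
    """Toy model of an AdvFS-style file domain: a pool of block
    devices (volumes) shared by any number of logical file sets."""

    def __init__(self):
        self.volumes = {}    # device name -> set of data extents stored there
        self.filesets = {}   # fileset name -> list of (device, extent) pairs

    def add_volume(self, device):
        self.volumes[device] = set()

    def create_fileset(self, name):
        self.filesets[name] = []

    def write(self, fileset, extent):
        # place new data on the least-used volume (crude balancing)
        device = min(self.volumes, key=lambda d: len(self.volumes[d]))
        self.volumes[device].add(extent)
        self.filesets[fileset].append((device, extent))

    def remove_volume(self, device):
        # online removal: migrate every extent off `device` first
        remaining = [d for d in self.volumes if d != device]
        if not remaining:
            raise RuntimeError("cannot remove the last volume")
        moved = {}
        for extent in self.volumes.pop(device):
            target = min(remaining, key=lambda d: len(self.volumes[d]))
            self.volumes[target].add(extent)
            moved[extent] = target
        for fs, places in self.filesets.items():
            self.filesets[fs] = [
                (moved[ext] if dev == device else dev, ext)
                for dev, ext in places
            ]

dom = FileDomain()
dom.add_volume("dsk1")
dom.add_volume("dsk2")
dom.create_fileset("usr")
dom.write("usr", "extent-a")
dom.write("usr", "extent-b")
dom.remove_volume("dsk1")   # data migrates to dsk2 before removal
print(dom.volumes)
```

The point of the sketch is the ordering constraint: a volume can only leave the pool once everything it holds has been rehomed, which is exactly why online removal was one of the trickier features to implement.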
Features Its features include: a journal to allow for fast crash recovery; undeletion support; high performance; a dynamic structure that allows an administrator to manage the file system on the fly; on-the-fly creation of snapshots; and defragmentation while the domain has active users. Under Linux, AdvFS supports an additional syncv system call to atomically commit changes to multiple files. History AdvFS, also known as Tru64 UNIX Advanced File System, was developed by Digital Equipment Corporation engineers in the late 1980s to mid-1990s in Bellevue, WA (DECwest). They had previously worked on the earlier (cancelled) MICA and OZIX projects there. It was first delivered on the DEC OSF/1 system (later Digital UNIX/Tru64 UNIX). Over time, development moved to teams located in Bellevue, WA and Nashua, NH. Versions were always one version number behind the operating system version: thus, DEC OSF/1 v3.2 had AdvFS v2.x, Digital UNIX 4.0 had AdvFS v3.x and Tru64 UNIX 5.x had AdvFS v4.x. It is generally considered that only AdvFS v4 had matured to production-level stability, with a sufficient set of tools to get administrators out of any kind of trouble. The original team had enough confidence in its log-based recovery to release it without an "fsck"-style recovery utility, on the assumption that the file system journal would always be allocated on mirrored drives. In 1996, Lee and Thekkath described the use of AdvFS on top of a novel disk virtualisation layer known as Petal. In a later paper, Thekkath et al. describe their own file system (Frangipani) built on top of Petal and compare it to the performance of AdvFS running on the same storage layer. Shapiro and Miller compared the performance of files stored in AdvFS to Oracle RDBMS version 7.3.4 BLOB storage.
Compaq's Sierra Parallel File System (PFS) built a cluster file system from multiple local AdvFS filesystems; testing carried out at Lawrence Livermore National Laboratory (LLNL) in 2000–2001 found that while the underlying AdvFS filesystem had adequate performance (albeit with high CPU utilisation), the PFS clustering layer on top of it performed poorly. On June 23, 2008, its source code was released by Hewlett-Packard under the GPL-2.0-only license (instead of the recently released GPLv3) at SourceForge, in order to be compatible with the also GPL-2.0-only licensed Linux kernel. References External links Source code at Sourceforge.net Digital Equipment Corporation Disk file systems Formerly proprietary software
4928523
https://en.wikipedia.org/wiki/TMM
TMM
TMM may refer to: Science Transfer-matrix method, a statistical mechanics method Transfer-matrix method (optics), a method to describe wave propagation through stratified media Trimethylenemethane, a reactive organic compound and a ligand in organometallic chemistry Software and business Tell Me More (software), French language-learning software from Auralog Testing Maturity Model, a software process improvement model Too Much Media, an American software company based in New Jersey Traffic Management Microkernel, a product of F5 Networks Translation memory manager, a software program to aid human translators Other uses Tell Me More, an American radio show on National Public Radio hosted by Michel Martin Texas Memorial Museum, a museum at the University of Texas at Austin in the United States Textbook of Military Medicine, a U.S. Army publication Theresa May, a Prime Minister of the United Kingdom, from her full name Theresa Mary May TMM, the former ISO 4217 code of the Turkmenistani manat, the currency of Turkmenistan TMM-1 mine, an anti-tank landmine
8920000
https://en.wikipedia.org/wiki/TopoFusion
TopoFusion
TopoFusion is GPS mapping software designed to plan and analyze trails using topographic maps and GPS tracks. History The software was created in 2002 by two brothers who were outdoor bikepacking enthusiasts and felt software could help them plan better trails. They developed the first version of the software in 2002, and one of them included it as part of his doctoral dissertation on GPS Driven Trail Simulation and Network Production. In 2004, the developers and a third author jointly presented the paper Digital Trail Libraries, which illustrated some of the graph-theory algorithms used by the software. The software remains supported, with refined functionality and improved support for additional maps and GPS devices. Features The software was designed to plan and analyze trails. When used for planning, proposed routes may be drafted and checked against different maps, and the result(s) downloaded to a GPS tracking device. TopoFusion is particularly noted for the ease of switching between and combining maps, and for its capability of simultaneously managing multiple trails. After a trail has been completed, the resulting GPS log can be uploaded to TopoFusion and the actual route analyzed, with the addition of any photographic images recorded en route. The product is marketed as a fully featured 'professional' version and a more basic version with reduced functionality at lower cost. A fully featured trial version, which is not time-limited, is also available; it restricts usability by watermarking map display tiles with the word 'DEMO'. The software is available for Microsoft Windows only; however, TopoFusion has claimed users have reported success using VMWare Fusion and Parallels emulation on Mac OS. Applications TopoFusion has been found useful by those engaged in the sport of geocaching. The software has been used in assisting analysis of GPS routes.
A survey reported in 2004 of GPS tracking of motorists visiting Acadia National Park in Maine, United States was assisted by the use of TopoFusion to review the scenes visited. It has also been used in studies of agricultural transportation logistics. TopoFusion can also assist in determining where photographs were taken on a trail, and can geocode the image or tag it onto a map. For this to be successful, the digital camera's time must be synchronized with the GPS unit's time, and both the GPS track and digital images made available to TopoFusion. The time when the image was taken can then be matched to the time on the GPS log, enabling the image to be enhanced with geocode fields even when real-time geotagging was not available when the image was taken. TopoFusion can also optionally annotate maps with images. References External links Official website Photo software Wireless locating Global Positioning System Plotting software
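The time-matching step behind this kind of post-hoc geotagging can be sketched as follows. The function is a minimal illustration, not TopoFusion's actual implementation: given a time-sorted GPS track and a photo timestamp, it returns the position of the nearest-in-time track point.

```python
from bisect import bisect_left
from datetime import datetime, timedelta

def locate_photo(track, photo_time):
    """Match a photo timestamp to the nearest GPS track point.

    `track` is a time-sorted list of (datetime, lat, lon) tuples.
    Assumes the camera clock was synchronized with the GPS clock.
    """
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    # the nearest point is either just before or just after photo_time
    candidates = track[max(i - 1, 0):i + 1] or [track[-1]]
    _, lat, lon = min(candidates, key=lambda p: abs(p[0] - photo_time))
    return lat, lon

t0 = datetime(2004, 7, 1, 10, 0, 0)
track = [(t0 + timedelta(seconds=10 * k), 44.35 + 0.001 * k, -68.21)
         for k in range(5)]
print(locate_photo(track, t0 + timedelta(seconds=24)))
```

A fuller implementation would interpolate between the two bracketing points rather than snap to the nearer one, but the clock-synchronization requirement noted in the text is the same either way.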
40663883
https://en.wikipedia.org/wiki/Ryan%20Ackroyd
Ryan Ackroyd
Ryan Ackroyd, known online as Kayla and lolspoon, is a former black hat hacker who was one of the six core members of the hacking group "LulzSec" during its 50-day spree of attacks from 6 May 2011 until 26 June 2011. At the time, Ackroyd posed as a female hacker named "Kayla" and was responsible for the penetration of multiple military and government domains and many high-profile intrusions into the networks of Gawker in December 2010, HBGaryFederal in 2011, PBS, Sony, Infragard Atlanta, Fox Entertainment and others. He eventually served 30 months in prison for his hacking activities. After his release from jail, Ackroyd publicly stated during "a conversation with Lulzsec" that he believes Anonymous, other activists and the like-minded should come together and attempt to change issues legally. In December 2014, he gave his first ever lecture, in an over-capacity auditorium at Sheffield Hallam University to over 200 students, where he spoke about Lulzsec and their "50 days of lulz". On his Twitter account, Ackroyd vowed to help the security of the systems he once breached, stating that he would "help secure and defend the systems in hopes we can all learn from each other, should I be given the chance to do so". He also added: "For me, it wasn't about stealing people's information, I just wanted to show people how flawed their so-called secure systems are. People need to fix their stuff… I sent countless emails to companies and even government organisations and I was ignored. I soon realised I'd have to show them why they should secure themselves before they would listen. I'm like Jiminy Cricket, only when you don't listen I'd hit you really hard with my tiny umbrella so you'd do the right thing," he joked. History Ackroyd is said to have low latent inhibition (LLI), which is described as driving his desire to learn how everything works. He was an infantry soldier who served in Iraq, where he specialised in encrypting military communications and systems.
Rise to prominence In 2011, Ackroyd was part of the small group of hackers who breached the security of HBGaryFederal.com through an SQL injection, and is said to have social-engineered the administrator of rootkit.com, the personal website of HBGary's CEO, to gain root access to their entire systems. During the rise of the group "LulzSec", Ackroyd was said to be its most talented hacker, doing much of the security penetration along with Hector Monsegur. He hacked into fox.com, UK bank machines, Sony, PBS, the FBI, Bethesda Softworks, Senate.gov, the Arizona Department of Public Safety, AT&T, AOL, Navy.mil, Infragard Atlanta, NATO Bookshops and others during LulzSec's infamous "50 Days of Lulz". Ackroyd was responsible for the hack on Booz Allen, where Edward Snowden was an employee. He was also responsible for the hack into Gawker Media's computer networks in December 2010, in retaliation for what Ackroyd perceived to be behaviour condescending toward Anonymous and other affiliated hackers. During this time, Ackroyd hacked into hundreds of military domains to show that vulnerabilities were abundant even in the most sensitive areas. Arrest and legal proceedings On 1 September 2011, Ackroyd's "lolspoon" Twitter feed went silent for the last time, amidst announcements that the hacker had been arrested in Mexborough, South Yorkshire. It became clear that Ackroyd was not, in fact, a girl, but rather a 24-year-old man with prior military service in the British Army, having served in Iraq. He was released on bail with fellow co-defendants Tflow and Topiary. Ackroyd was accused of installing a trip-wire that activated as soon as agents moved his computer upon raiding his home, wiping all data on his system. On 9 April 2013, Ackroyd appeared in court for the final time, where he was branded "highly forensically aware" by the court.
Ackroyd pleaded not guilty to distributed denial-of-service (DDoS) attacks carried out under the LulzSec banner during its "AntiSec" campaign, but pleaded guilty to violating the Computer Misuse Act. Ackroyd served a 30-month prison sentence in England. After release Ackroyd was an Associate Lecturer at Sheffield Hallam University and was also enrolled in a master's degree in information systems security. He is now the Lead Penetration Tester at The Hut Group. References Living people British computer criminals Anonymous (hacker group) activists Year of birth missing (living people) Hacktivists
52991251
https://en.wikipedia.org/wiki/Nitro%20Zeus
Nitro Zeus
Nitro Zeus is the project name for a well-funded, comprehensive cyber attack plan created as a mitigation strategy after the Stuxnet malware campaign and its aftermath. Unlike Stuxnet, which was loaded onto a system after the design phase to affect its proper operation, Nitro Zeus's objectives were built into a system during the design phase, unbeknownst to the system's users. This built-in feature allows a more assured and effective cyber attack against the system's users. Information about its existence was raised during research and interviews carried out by Alex Gibney for his Zero Days documentary film. The proposed long-term, widespread infiltration of major Iranian systems would disrupt and degrade communications, the power grid, and other vital systems as desired by the cyber attackers. This was to be achieved by electronic implants in Iranian computer networks. The project was seen as one pathway among alternatives to full-scale war. See also Kill Switch Backdoor (computing) Operation Olympic Games References Malware Cyberwarfare Computer hardware
47404182
https://en.wikipedia.org/wiki/Caroline%20Means
Caroline Means
Caroline Means (; born March 16, 1993) is a retired American soccer goalkeeper. Before announcing her retirement in 2018, Means played for Sky Blue FC, Seattle Reign FC and Orlando Pride in the National Women's Soccer League. She also represented the United States on the under-20, under-18, under-17, and under-15 national teams. Early life Born to parents David and Kelly Stanley in Oklahoma City, Means grew up in Missouri and attended Lee's Summit North High School in Lee's Summit, Missouri from 2007 to 2011, where she earned two consecutive Goalkeeper of the Year awards from the Missouri State High School Soccer Coaches Association. She also played volleyball, basketball and track for the school. She played club soccer for KCFC Intensity, the Scream and her childhood and first club team, the Pink Panthers. She won five state titles and appeared in five regional playoffs. In 2011, she won a national tournament with KCFC, making three saves during a shootout to win the championship. USC Trojans, 2012–2014 After playing one season for the University of Missouri, she transferred to the University of Southern California, where she played for the Trojans from 2012 to 2014. During her first season with the Trojans, she started in 17 of the 18 games in which she played, making 95 saves and allowing 32 goals. During her junior year, she started all 20 games and made 78 saves, allowing 25 goals. Her senior year was by far her best in college: appearing and starting in every game as team captain, she was the only senior to start, set a record for most saves in a match, and saved a penalty kick against Notre Dame to beat them on the road. Playing career Club Originally training with Seattle Reign FC as an amateur player, Stanley signed with the team in July 2015. She made her first appearance for the club on August 1, 2015 during an away match against the Boston Breakers. She signed with Sky Blue FC in March 2016.
After signing with the Orlando Pride following an injury to their backup goalkeeper Aubrey Bledsoe, she was substituted in for Ashlyn Harris early in the first half after Harris came down with a non-contact injury. The substitution came one day after she joined her new club for the first time in Seattle, and she recorded several saves that contributed to a hard-fought road point against the Seattle Reign. On August 14, 2017, Orlando cut Means to make roster room for the return of Ashlyn Harris. She officially announced her retirement from soccer on January 7, 2018, at age 24. International She has represented the United States on various youth national teams at the under-20, under-18, under-17, and under-15 levels. She played at the CONCACAF U-17 World Cup qualifiers in Florida and saved a penalty kick to beat Costa Rica. Post-playing career On February 27, 2018, Means announced that she had accepted the position of goalkeeper coach for the University of Tulsa Golden Hurricane women's soccer team, becoming the program's first full-time goalkeeper coach. Personal life She is married to Baltimore Orioles pitcher John Means. They welcomed their first child, a son, in 2020. Means is a Christian. Means is an ambassador for The Young and Brave Foundation, a non-profit organization that supports children with cancer. References External links Orlando Pride player profile USC Trojans player profile Pro Skills Soccer coach profile Living people 1993 births American women's soccer players OL Reign players National Women's Soccer League players Soccer players from Missouri USC Trojans women's soccer players Women's association football goalkeepers Missouri Tigers women's soccer players NJ/NY Gotham FC players Orlando Pride players
58084610
https://en.wikipedia.org/wiki/Pcb-rnd
Pcb-rnd
pcb-rnd is a modular and compact (core under 60k SLOC, plugins at 100k SLOC) software application used for layout design of electrical circuits. Pcb-rnd is used professionally as well as in universities. Pre-built packages are available for multiple operating systems. The software focuses on multiple file format support, scripting, multiple font support, a query language and command-line support for batch processing and automation. The software provides user interfaces for the command line, gtk2+gdk, gtk2+gl, and motif, offering the same functionality through every interface. History pcb-rnd was originally developed as a friendly fork of the gEDA PCB project. In 2020, pcb-rnd was funded through NGI0 PET as a part of the European Commission's Next Generation Internet program. See also Comparison of EDA Software References External links Engineering software that uses GTK Free software programmed in C Free electronic design automation software Electronic design automation software for Linux
19348017
https://en.wikipedia.org/wiki/Map%20Overlay%20and%20Statistical%20System
Map Overlay and Statistical System
The Map Overlay and Statistical System (MOSS) is a GIS software technology. Development of MOSS began in late 1977 and it was first deployed for use in 1979. MOSS represents a very early public domain, open source GIS development, predating the better-known GRASS by five years. MOSS used a polygon-based data structure in which point, line, and polygon features could all be stored in the same file. The user interacted with MOSS via a command line interface. History In the mid-1970s, coal-mining activities required Federal agencies to evaluate the impacts of strip mine development on wildlife and wildlife habitat. They were further tasked with evaluating and making recommendations regarding habitat mitigation. In 1976, the US Fish and Wildlife Service (FWS) issued a Request for Proposals (RFP) for developing a geographic information system (GIS) for environmental impact and habitat mitigation studies. The scope of the project included completing a user needs assessment, developing a GIS functional scope, evaluating existing GIS technologies, and making recommendations to the USFWS as to the appropriate course of action for the development and deployment of GIS technology. In late 1976, the contract was awarded to the Federation of Rocky Mountain States, a not-for-profit organization that eventually evolved into the Western Governors' Policy Office. For the first six months of 1977, the project team worked on two tasks: a user needs assessment and an inventory of existing GIS technology. The needs assessment involved interviewing wildlife biologists, natural resources planners, and other professionals who would be involved in wildlife habitat definition and habitat mitigation. The results of the assessment were published in the summer of 1977. Concurrently, Carl Reed did an inventory of existing public domain and commercial GIS technology. Approximately 70 different mapping and GIS software packages were identified.
Of these, 54 had enough documentation and basic required functionality to warrant further analysis in terms of matching GIS functionality against user requirements. This document is a valuable historical record, as it has information and details of systems long extinct and forgotten. The evaluation resulted in the determination that no existing GIS capability provided even a fraction of the functional capability required to meet user needs. Therefore, the decision was made to design and program a new interactive GIS application that used existing publicly available software whenever possible. Using the user requirements as the design driver, the design of MOSS began during the summer of 1977. Once the group agreed on the design, programming started. The development environment was a CDC mainframe running the Kronos operating system. Fortran IV was the development language. Graphics presentation and code development were done on a Tektronix 4010. Initial programming was completed in 1978. That year, MOSS was used in a pilot project to test the validity of using the new MOSS software in a real-world FWS habitat mitigation project. The pilot project used vector and raster map data digitized from USGS base maps, from aerial imagery, and from maps provided by other agencies. The pilot project was successful and allowed additional enhancements and bug fixes to be accomplished before deploying MOSS for production use. By 1979, a user-accessible version of MOSS was available on the CDC mainframe. In late 1979, the FWS purchased a Data General computer (AOS operating system) and required MOSS to be ported from the CDC mainframe to the DG minicomputer. This work was completed in the summer of 1980. By the middle of 1980, the MOSS software suite was ready for production use. Once installed, operational, and properly documented at the WELUT facility in Fort Collins, Colorado, an extensive technology transfer and training activity began.
Within a few years, numerous other Federal agencies were using MOSS for a variety of projects. By 1983, MOSS was being used in the Bureau of Indian Affairs, multiple Bureau of Land Management State Offices, the Bureau of Reclamation, National Park Service, US Army Engineering Topographic Labs, Fish and Wildlife Service, and numerous State, local, and university organizations. The first MOSS Users Workshop was held in 1983 and had about 30 attendees. The second users workshop was held in Denver in 1984 with almost 150 attendees. Architecture MOSS allowed the user to store both vector and raster data in the same geospatial database. The vector data could be points, lines, or polygons. MOSS used what at the time was referred to as a "full polygon" representation: each polygon was stored with its complete boundary, so vertices along edges shared with adjacent polygons were duplicated in each polygon. Polygons could have islands (holes). Raster data were stored as pixels. The early versions of MOSS only allowed up to 32,000 coordinate pairs per line or polygon feature. This was due to Fortran array addressing issues. Raster images could be no larger than 32,000 pixels per row. Each map in a MOSS database could have up to 32,000 features. There was no limit on the number of maps in the database. Each map had a map header that contained a variety of metadata, such as the coordinate reference system (projection), date of creation, owner, date of last update, description, and so forth. Metadata was "searchable". References Comparison of Selected Operational Capabilities of 54 Geographic Information Systems. FRMS, 1977 (Gropper, Hamill, Reed, Salmen). Under contract number 14160082155. Evaluation and Selection of Existing GIS Software for The U.S. FISH AND WILDLIFE GIS. Carl Reed, AutoCarto 3, 1978. Logical Capabilities of the (USFWS) GIS. FRMS, 1978. (Reed, Hammill, Gropper, Salmen). Not available online. Available from the lead author. U.S. Fish & Wildlife Service. 1976. WELUT Western Energy and Land Use Team.
Fort Collins, CO: Brochure. Second Annual MOSS Users Workshop. 1985. Denver, Colorado. Proceedings prepared by DOI BLM. Not available online. Available from the BLM. User Needs Assessment for an Operational GIS within the US Fish and Wildlife. FRMS, 1977. (Gropper, Hamill, Nez, Reed, Salmen). Under contract number 14160082155. External links Map Overlay and Statistical System online resources from the MOSS Heritage team MOSS Code repository (Open Access) Zenodo, 2021 Reed, Carl N III, Katz, Sol, Frosh, Randy, Davidson, John, Hunter, Anne, & Lee, John. (2021). Open Source GIS history from the OSGeo Foundation GIS software
49242352
https://en.wikipedia.org/wiki/AlphaGo
AlphaGo
AlphaGo is a computer program that plays the board game Go. It was developed by DeepMind Technologies, a subsidiary of Google (now Alphabet Inc.). Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master. After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. AlphaZero has in turn been succeeded by a program known as MuZero, which learns without being taught the rules. AlphaGo and its successors use a Monte Carlo tree search algorithm to find their moves based on knowledge previously acquired by machine learning, specifically by an artificial neural network (a deep learning method) through extensive training, both from human and computer play. A neural network is trained to identify the best moves and the winning percentages of these moves. This neural network improves the strength of the tree search, resulting in stronger move selection in the next iteration. In October 2015, in a match against Fan Hui, the original AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board. In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap. Although it lost to Lee Sedol in the fourth game, Lee resigned in the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association. The lead up and the challenge match with Lee Sedol were documented in a documentary film also titled AlphaGo, directed by Greg Kohs.
The win by AlphaGo was chosen by Science as one of the Breakthrough of the Year runners-up on 22 December 2016. At the 2017 Future of Go Summit, the Master version of AlphaGo beat Ke Jie, the number one ranked player in the world at the time, in a three-game match, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association. After the match between AlphaGo and Ke Jie, DeepMind retired AlphaGo, while continuing AI research in other areas. The self-taught AlphaGo Zero achieved a 100–0 victory against the early competitive version of AlphaGo, and its successor AlphaZero is currently perceived as the world's top player in Go as well as possibly in chess. History Go is considered much more difficult for computers to win than other games such as chess, because its much larger branching factor makes it prohibitively difficult to use traditional AI methods such as alpha–beta pruning, tree traversal and heuristic search. Almost two decades after IBM's computer Deep Blue beat world chess champion Garry Kasparov in the 1997 match, the strongest Go programs using artificial intelligence techniques only reached about amateur 5-dan level, and still could not beat a professional Go player without a handicap. In 2012, the software program Zen, running on a four PC cluster, beat Masaki Takemiya (9p) twice at five- and four-stone handicaps. In 2013, Crazy Stone beat Yoshio Ishida (9p) at a four-stone handicap. According to DeepMind's David Silver, the AlphaGo research project was formed around 2014 to test how well a neural network using deep learning can compete at Go. AlphaGo represents a significant improvement over previous Go programs. In 500 games against other available Go programs, including Crazy Stone and Zen, AlphaGo running on a single computer won all but one. In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and 77% of games played against AlphaGo running on a single computer. 
The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs. Match against Fan Hui In October 2015, the distributed version of AlphaGo defeated the European Go champion Fan Hui, a 2-dan (out of 9 dan possible) professional, five to zero. This was the first time a computer Go program had beaten a professional human player on a full-sized board without handicap. The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature describing the algorithms used. Match against Lee Sedol AlphaGo played South Korean professional Go player Lee Sedol, ranked 9-dan, one of the best players at Go, with five games taking place at the Four Seasons Hotel in Seoul, South Korea on 9, 10, 12, 13, and 15 March 2016, which were video-streamed live. AlphaGo won four of the five games; Lee won the fourth game, making him the only human player ever to beat AlphaGo in any of its 74 official games. AlphaGo ran on Google's cloud computing with its servers located in the United States. The match used Chinese rules with a 7.5-point komi, and each side had two hours of thinking time plus three 60-second byoyomi periods. The version of AlphaGo playing against Lee used a similar amount of computing power as was used in the Fan Hui match. The Economist reported that it used 1,920 CPUs and 280 GPUs. At the time of play, Lee Sedol had the second-highest number of Go international championship victories in the world, after South Korean player Lee Changho, who kept the world championship title for 16 years. Since there is no single official method of ranking in international Go, the rankings may vary among the sources. While he was sometimes ranked first, some sources ranked Lee Sedol as the fourth-best player in the world at the time. AlphaGo was not specifically trained to face Lee, nor was it designed to compete with any specific human player.
The first three games were won by AlphaGo following resignations by Lee. However, Lee beat AlphaGo in the fourth game, winning by resignation at move 180. AlphaGo then won the fifth game by resignation, taking its fourth win. The prize was US$1 million. Since AlphaGo won four out of five and thus the series, the prize was donated to charities, including UNICEF. Lee Sedol received $150,000 for participating in all five games and an additional $20,000 for his win in Game 4. In June 2016, at a presentation held at a university in the Netherlands, Aja Huang, one of the DeepMind team, revealed that they had patched the logical weakness that occurred during the fourth game of the match between AlphaGo and Lee, and that after move 78 (which was dubbed the "divine move" by many professionals), it would play as intended and maintain Black's advantage. Before move 78, AlphaGo was leading throughout the game, but Lee's move caused the program's computing powers to be diverted and confused. Huang explained that AlphaGo's policy network of finding the most accurate move order and continuation did not precisely guide AlphaGo to make the correct continuation after move 78, since its value network did not determine Lee's 78th move as being the most likely, and therefore when the move was made AlphaGo could not make the right adjustment to the logical continuation. Sixty online games On 29 December 2016, a new account on the Tygem server named "Magister" (shown as 'Magist' at the server's Chinese version) from South Korea began to play games with professional players. It changed its account name to "Master" on 30 December, then moved to the FoxGo server on 1 January 2017. On 4 January, DeepMind confirmed that the "Magister" and the "Master" accounts were both played by an updated version of AlphaGo, called AlphaGo Master.
As of 5 January 2017, AlphaGo Master's online record was 60 wins and 0 losses, including three victories over Go's top-ranked player, Ke Jie, who had been quietly briefed in advance that Master was a version of AlphaGo. After losing to Master, Gu Li offered a bounty of 100,000 yuan (US$14,400) to the first human player who could defeat Master. Master played at the pace of 10 games per day. Many quickly suspected it to be an AI player due to little or no rest between games. Its adversaries included many world champions such as Ke Jie, Park Jeong-hwan, Yuta Iyama, Tuo Jiaxi, Mi Yuting, Shi Yue, Chen Yaoye, Li Qincheng, Gu Li, Chang Hao, Tang Weixing, Fan Tingyu, Zhou Ruiyang, Jiang Weijie, Chou Chun-hsun, Kim Ji-seok, Kang Dong-yun, Park Yeong-hun, and Won Seong-jin, as well as national champions or world championship runners-up such as Lian Xiao, Tan Xiao, Meng Tailing, Dang Yifei, Huang Yunsong, Yang Dingxin, Gu Zihao, Shin Jinseo, Cho Han-seung, and An Sungjoon. All 60 games except one were fast-paced games with three 20- or 30-second byo-yomi periods. Master offered to extend the byo-yomi to one minute when playing with Nie Weiping in consideration of his age. After winning its 59th game, Master revealed itself in the chatroom to be controlled by Dr. Aja Huang of the DeepMind team, then changed its nationality to the United Kingdom. After these games were completed, the co-founder of Google DeepMind, Demis Hassabis, said in a tweet, "we're looking forward to playing some official, full-length games later [2017] in collaboration with Go organizations and experts". Go experts were impressed by the program's performance and its nonhuman play style; Ke Jie stated that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go."
Future of Go Summit In the Future of Go Summit held in Wuzhen in May 2017, AlphaGo Master played three games with Ke Jie, the world No. 1 ranked player, as well as two games with several top Chinese professionals: one pair Go game and one against a collaborating team of five human players. Google DeepMind offered a 1.5 million dollar winner's prize for the three-game match between Ke Jie and Master, while the losing side took 300,000 dollars. Master won all three games against Ke Jie, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association. After winning its three-game match against Ke Jie, the top-rated world Go player, AlphaGo retired. DeepMind also disbanded the team that worked on the game to focus on AI research in other areas. After the Summit, DeepMind published 50 full-length AlphaGo vs AlphaGo matches as a gift to the Go community. AlphaGo Zero and AlphaZero AlphaGo's team published an article in the journal Nature on 19 October 2017, introducing AlphaGo Zero, a version without human data and stronger than any previous human-champion-defeating version. By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days. In a paper released on arXiv on 5 December 2017, DeepMind claimed that it generalized AlphaGo Zero's approach into a single AlphaZero algorithm, which achieved within 24 hours a superhuman level of play in the games of chess, shogi, and Go by defeating world-champion programs Stockfish, Elmo, and the 3-day version of AlphaGo Zero in each case. Teaching tool On 11 December 2017, DeepMind released the AlphaGo teaching tool on its website to analyze winning rates of different Go openings as calculated by AlphaGo Master. The teaching tool collects 6,000 Go openings from 230,000 human games, each analyzed with 10,000,000 simulations by AlphaGo Master.
Many of the openings include human move suggestions. Versions An early version of AlphaGo was tested on hardware with various numbers of CPUs and GPUs, running in asynchronous or distributed mode. Two seconds of thinking time was given to each move. The resulting Elo ratings are listed below. Higher ratings were achieved in the matches with more time per move. In May 2016, Google unveiled its own proprietary hardware "tensor processing units", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol. In the Future of Go Summit in May 2017, DeepMind disclosed that the version of AlphaGo used in this Summit was AlphaGo Master, and revealed that it had measured the strength of different versions of the software. AlphaGo Lee, the version used against Lee, could give AlphaGo Fan, the version used in AlphaGo vs. Fan Hui, three stones, and AlphaGo Master was in turn three stones stronger than AlphaGo Lee. Algorithm As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network", both implemented using deep neural network technology. A limited amount of game-specific feature detection pre-processing (for example, to highlight whether a move matches a nakade pattern) is applied to the input before it is sent to the neural networks. The networks are convolutional neural networks with 12 layers, trained by reinforcement learning. The system's neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves.
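The policy-and-value-guided search just described is commonly realized with a PUCT-style selection rule: the mean value of a move (the value-network signal) is combined with an exploration bonus weighted by the policy network's prior. The sketch below is only illustrative, with invented numbers; it is not DeepMind's code, and the exact published formula differs in details.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child index maximizing Q + U, the selection rule used in
    AlphaGo-style Monte Carlo tree search. Each child is a dict with
    visit count N, total value W, and policy-network prior P."""
    total_visits = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0           # mean value so far
        u = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])  # prior-guided bonus
        return q + u
    return max(range(len(children)), key=lambda i: score(children[i]))

# A barely-explored move with a strong policy prior can outrank a
# heavily-explored move with a similar value estimate:
children = [
    {"N": 40, "W": 20.0, "P": 0.2},   # Q = 0.50, already well explored
    {"N": 1,  "W": 0.6,  "P": 0.6},   # Q = 0.60, high prior, one visit
]
print(puct_select(children))  # → 1 (the high-prior move)
```

The bonus term shrinks as a child's visit count grows, so the search gradually shifts from trusting the policy prior to trusting the accumulated value estimates.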
Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play. To avoid "disrespectfully" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the match against Lee, the resignation threshold was set to 20%. Style of play Toby Manning, the match referee for AlphaGo vs. Fan Hui, has described the program's style as "conservative". AlphaGo's playing style strongly favours greater probability of winning by fewer points over lesser probability of winning by more points. Its strategy of maximising its probability of winning is distinct from what human players tend to do which is to maximise territorial gains, and explains some of its odd-looking moves. It makes a lot of opening moves that have never or seldom been made by humans. It likes to use shoulder hits, especially if the opponent is over concentrated. Responses to 2016 victory AI community AlphaGo's March 2016 victory was a major milestone in artificial intelligence research. Go had previously been regarded as a hard problem in machine learning that was expected to be out of reach for the technology of the time. Most experts thought a Go program as powerful as AlphaGo was at least five years away; some experts thought that it would take at least another decade before computers would beat Go champions. Most observers at the beginning of the 2016 matches expected Lee to beat AlphaGo. With games such as checkers (that has been "solved" by the Chinook draughts player team), chess, and now Go won by computers, victories at popular board games can no longer serve as major milestones for artificial intelligence in the way that they used to. Deep Blue's Murray Campbell called AlphaGo's victory "the end of an era... board games are more or less done and it's time to move on." 
When compared with Deep Blue or Watson, AlphaGo's underlying algorithms are potentially more general-purpose and may be evidence that the scientific community is making progress towards artificial general intelligence. Some commentators believe AlphaGo's victory makes for a good opportunity for society to start preparing for the possible future impact of machines with general purpose intelligence. As noted by entrepreneur Guy Suter, AlphaGo only knows how to play Go and doesn't possess general-purpose intelligence; "[It] couldn't just wake up one morning and decide it wants to learn how to use firearms." AI researcher Stuart Russell said that AI systems such as AlphaGo have progressed quicker and become more powerful than expected, and we must therefore develop methods to ensure they "remain under human control". Some scholars, such as Stephen Hawking, warned (in May 2015 before the matches) that some future self-improving AI could gain actual general intelligence, leading to an unexpected AI takeover; other scholars disagree: AI expert Jean-Gabriel Ganascia believes that "Things like 'common sense'... may never be reproducible", and says "I don't see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration." Computer scientist Richard Sutton said "I don't think people should be scared... but I do think people should be paying attention." In China, AlphaGo was a "Sputnik moment" which helped convince the Chinese government to prioritize and dramatically increase funding for artificial intelligence. In 2017, the DeepMind AlphaGo team received the inaugural IJCAI Marvin Minsky medal for Outstanding Achievements in AI. "AlphaGo is a wonderful achievement, and a perfect example of what the Minsky Medal was initiated to recognise", said Professor Michael Wooldridge, Chair of the IJCAI Awards Committee. 
"What particularly impressed IJCAI was that AlphaGo achieves what it does through a brilliant combination of classic AI techniques as well as the state-of-the-art machine learning techniques that DeepMind is so closely associated with. It's a breathtaking demonstration of contemporary AI, and we are delighted to be able to recognise it with this award." Go community Go is a popular game in China, Japan and Korea, and the 2016 matches were watched by perhaps a hundred million people worldwide. Many top Go players characterized AlphaGo's unorthodox plays as seemingly-questionable moves that initially befuddled onlookers, but made sense in hindsight: "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself." AlphaGo appeared to have unexpectedly become much stronger, even when compared with its October 2015 match where a computer had beaten a Go professional for the first time ever without the advantage of a handicap. The day after Lee's first defeat, Jeong Ahram, the lead Go correspondent for one of South Korea's biggest daily newspapers, said "Last night was very gloomy... Many people drank alcohol." The Korea Baduk Association, the organization that oversees Go professionals in South Korea, awarded AlphaGo an honorary 9-dan title for exhibiting creative skills and pushing forward the game's progress. China's Ke Jie, an 18-year-old generally recognized as the world's best Go player at the time, initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would "copy my style". As the matches progressed, Ke Jie went back and forth, stating that "it is highly likely that I (could) lose" after analysing the first three matches, but regaining confidence after AlphaGo displayed flaws in the fourth match. 
Toby Manning, the referee of AlphaGo's match against Fan Hui, and Hajin Lee, secretary general of the International Go Federation, both reason that in the future, Go players will get help from computers to learn what they have done wrong in games and improve their skills. After game two, Lee said he felt "speechless": "From the very beginning of the match, I could never manage an upper hand for one single move. It was AlphaGo's total victory." Lee apologized for his losses, stating after game three that "I misjudged the capabilities of AlphaGo and felt powerless." He emphasized that the defeat was "Lee Se-dol's defeat" and "not a defeat of mankind". Lee said his eventual loss to a machine was "inevitable" but stated that "robots will never understand the beauty of the game the same way that we humans do." Lee called his game four victory a "priceless win that I (would) not exchange for anything." Similar systems Facebook has also been working on its own Go-playing system darkforest, also based on combining machine learning and Monte Carlo tree search. Although a strong player against other computer Go programs, as of early 2016, it had not yet defeated a professional human player. Darkforest has lost to CrazyStone and Zen and is estimated to be of similar strength to CrazyStone and Zen. DeepZenGo, a system developed with support from video-sharing website Dwango and the University of Tokyo, lost 2–1 in November 2016 to Go master Cho Chikun, who holds the record for the largest number of Go title wins in Japan. A 2018 paper in Nature cited AlphaGo's approach as the basis for a new means of computing potential pharmaceutical drug molecules. Example game AlphaGo Master (white) v. Tang Weixing (31 December 2016), AlphaGo won by resignation. White 36 was widely praised. 
Impacts on Go The documentary film AlphaGo raised hopes that Lee Sedol and Fan Hui would have benefitted from their experience of playing AlphaGo, but as of May 2018 their ratings were little changed; Lee Sedol was ranked 11th in the world, and Fan Hui 545th. On 19 November 2019, Lee announced his retirement from professional play, arguing that he could never be the top overall player of Go due to the increasing dominance of AI. Lee referred to them as being "an entity that cannot be defeated". See also Chinook (draughts player), draughts playing program Glossary of artificial intelligence Go and mathematics Leela (software) TD-Gammon, backgammon neural network Pluribus (poker bot) AlphaZero AlphaFold References External links AlphaGo wiki at Sensei's Library, including links to AlphaGo games AlphaGo page, with archive and games Estimated 2017 rating of Alpha Go 2015 software Artificial intelligence applications Go engines Google Applied machine learning
1350138
https://en.wikipedia.org/wiki/Micro-Controller%20Operating%20Systems
Micro-Controller Operating Systems
Micro-Controller Operating Systems (MicroC/OS, stylized as μC/OS) is a real-time operating system (RTOS) designed by Jean J. Labrosse in 1991. It is a priority-based preemptive real-time kernel for microprocessors, written mostly in the programming language C. It is intended for use in embedded systems. MicroC/OS allows defining several functions in C, each of which can execute as an independent thread or task. Each task runs at a different priority, and runs as if it owns the central processing unit (CPU). Lower priority tasks can be preempted by higher priority tasks at any time. Higher priority tasks use operating system (OS) services (such as a delay or event) to allow lower priority tasks to execute. OS services are provided for managing tasks and memory, communicating between tasks, and timing. History The MicroC/OS kernel was published originally in a three-part article in Embedded Systems Programming magazine and the book μC/OS The Real-Time Kernel by Jean J. Labrosse (). The author intended at first to simply describe the internals of a portable operating system he had developed for his own use, but later developed the OS as a commercial product in versions II and III. μC/OS-II Based on the source code written for μC/OS, and introduced as a commercial product in 1998, μC/OS-II is a portable, ROM-able, scalable, preemptive, real-time, deterministic, multitasking kernel for microprocessors, and digital signal processors (DSPs). It manages up to 64 tasks. Its size can be scaled (between 5 and 24 Kbytes) to only contain the features needed for a given use. Most of μC/OS-II is written in highly portable ANSI C, with target microprocessor-specific code written in assembly language. Use of the latter is minimized to ease porting to other processors. Uses in embedded systems μC/OS-II was designed for embedded uses. If the producer has the proper tool chain (i.e., C compiler, assembler, and linker-locator), μC/OS-II can be embedded as part of a product. 
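The scheduling rule described above (the highest-priority ready task always gets the CPU) can be shown in miniature. This is an illustrative Python model, not μC/OS code (which is C), and the task names are invented:

```python
def highest_priority_ready(tasks):
    """Return the name of the ready task with the best priority, mimicking a
    priority-based preemptive kernel's scheduler decision. `tasks` maps
    name -> (priority, ready_flag); a lower number means higher priority,
    as in μC/OS."""
    ready = {name: prio for name, (prio, is_ready) in tasks.items() if is_ready}
    if not ready:
        return None  # a real kernel would run its idle task here
    return min(ready, key=ready.get)

tasks = {
    "logger":  (30, True),   # low-priority background work
    "sensor":  (10, True),   # high priority: preempts logger when ready
    "display": (20, False),  # blocked, e.g. waiting on an event
}
print(highest_priority_ready(tasks))  # → sensor
```

When "sensor" later blocks on a delay or event (clearing its ready flag), the same decision rule would pick "logger", which is exactly how lower-priority tasks get to run under a preemptive kernel.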
μC/OS-II is used in many embedded systems, including: Avionics Medical equipment and devices Data communications equipment White goods (appliances) Mobile phones, personal digital assistants (PDAs), MIDs Industrial controls Consumer electronics Automotive Task states μC/OS-II is a multitasking operating system. Each task is an infinite loop and can be in any one of the following five states (see figure below): Dormant Ready Running Waiting (for an event) Interrupted (interrupt service routine (ISR)) Further, it can manage up to 64 tasks. However, it is recommended that eight of these tasks be reserved for μC/OS-II, leaving an application with up to 56 tasks. Kernels The kernel is the name given to the program that does most of the housekeeping tasks for the operating system. The boot loader hands control over to the kernel, which initializes the various devices to a known state and makes the computer ready for general operations. The kernel is responsible for managing tasks (i.e., for managing the CPU's time) and communicating between tasks. The fundamental service provided by the kernel is context switching. The scheduler is the part of the kernel responsible for determining which task runs next. Most real-time kernels are priority based. In a priority-based kernel, control of the CPU is always given to the highest priority task ready to run. Two types of priority-based kernels exist: non-preemptive and preemptive. Non-preemptive kernels require that each task do something to explicitly give up control of the CPU. A preemptive kernel is used when system responsiveness is more important. Thus, μC/OS-II and most commercial real-time kernels are preemptive: the highest priority task ready to run is always given control of the CPU. Assigning tasks Tasks with the highest rate of execution are given the highest priority using rate-monotonic scheduling. This scheduling algorithm is used in real-time operating systems (RTOS) with a static-priority scheduling class.
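Rate-monotonic assignment comes with a classic sufficient (though not necessary) schedulability test, the Liu–Layland utilization bound U ≤ n(2^(1/n) − 1). The check below is an illustration of that bound, not part of μC/OS itself, and the task timings are invented:

```python
def rms_schedulable(tasks):
    """Sufficient rate-monotonic schedulability test. `tasks` is a list of
    (execution_time, period) pairs in the same time unit; returns the total
    CPU utilization, the Liu-Layland bound, and whether the test passes."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)   # tends toward ln 2 ≈ 0.693 as n grows
    return utilization, bound, utilization <= bound

# Two tasks: 20 ms of work every 100 ms, and 30 ms every 150 ms.
u, b, ok = rms_schedulable([(20, 100), (30, 150)])
print(round(u, 3), round(b, 3), ok)  # → 0.4 0.828 True
```

A task set that fails this test is not necessarily unschedulable; the bound is conservative, and an exact answer requires response-time analysis.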
Managing tasks
In computing, a task is a unit of execution. In some operating systems, a task is synonymous with a process; in others, with a thread. In batch processing computer systems, a task is a unit of execution within a job. A user of μC/OS-II can control tasks using the following features: task creation, task stacks and stack checking, task deletion, changing a task's priority, suspending and resuming a task, and getting information about a task.

Managing memory
To avoid fragmentation, μC/OS-II allows applications to obtain fixed-size memory blocks from a partition made of a contiguous memory area. All memory blocks are the same size, and the partition contains an integral number of blocks. Allocation and deallocation of these memory blocks is performed in constant time and is deterministic.

Managing time
μC/OS-II requires that a periodic time source be provided to keep track of time delays and timeouts. This periodic time source is termed a clock tick and should occur between 10 and 1000 times per second (10 to 1000 Hz). The faster the tick rate, the more overhead μC/OS-II imposes on the system; the frequency of the clock tick depends on the desired tick resolution of an application. Tick sources can be obtained by dedicating a hardware timer or by generating an interrupt from an alternating current (AC) power line (50 or 60 Hz) signal. Once a clock tick source is available, the kernel provides services to delay a task and to resume a delayed task.

Communicating between tasks
Intertask or interprocess communication in μC/OS-II occurs via semaphores, message mailboxes, message queues, tasks, and interrupt service routines (ISRs). A task or an ISR signals a task through a kernel object called an event control block (ECB); the signal is considered to be an event.

μC/OS-III
μC/OS-III is the acronym for Micro-Controller Operating Systems Version 3, introduced in 2009 and adding functionality to the μC/OS-II RTOS.
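The fixed-size partition scheme described under Managing memory above maps naturally onto a free list. The Python class below models the behaviour of the kernel's memory services (OSMemCreate(), OSMemGet(), OSMemPut() in μC/OS-II); it is an illustrative sketch, not the kernel API.

```python
class MemoryPartition:
    """Toy model of a uC/OS-II-style memory partition: a contiguous
    area divided into equal fixed-size blocks, with constant-time
    get/put via a free list (here, a plain list of block indices)."""
    def __init__(self, n_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(n_blocks))  # indices of free blocks

    def get(self):
        # Constant-time allocation: pop a block off the free list.
        # The real kernel returns an error code instead of None.
        return self.free.pop() if self.free else None

    def put(self, block):
        # Constant-time deallocation: push the block back.
        self.free.append(block)

part = MemoryPartition(n_blocks=4, block_size=32)
a = part.get()
b = part.get()
part.put(a)  # the block is immediately reusable, with no fragmentation
```

Because every block is the same size, there is no search for a best-fit hole; allocation and release are both a single list operation, which is what makes the scheme deterministic.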
μC/OS-III offers all of the features and functions of μC/OS-II. The biggest difference is the number of supported tasks: μC/OS-II allows only one task at each of 255 priority levels, for a maximum of 255 tasks, whereas μC/OS-III allows any number of application tasks, priority levels, and tasks per level, limited only by the processor's access to memory. μC/OS-II and μC/OS-III are currently maintained by Micrium, Inc., a subsidiary of Silicon Labs, and can be licensed per product or per product line.

Uses in embedded systems
The uses are the same as for μC/OS-II.

Task states
μC/OS-III is a multitasking operating system. Each task is an infinite loop and can be in any one of five states (dormant, ready, running, interrupted, or pending). Task priorities can range from 0 (highest priority) to a maximum of 255 (lowest possible priority).

Round-robin scheduling
When two or more tasks have the same priority, the kernel allows one task to run for a predetermined amount of time, named a quantum, and then selects another task. This process is termed round-robin scheduling or time slicing. The kernel gives control to the next task in line if the current task has no work to do during its time slice, if it completes before the end of its time slice, or if the time slice ends.

Kernels
The kernel functionality of μC/OS-III is the same as that of μC/OS-II.

Managing tasks
Task management also functions the same as in μC/OS-II. However, μC/OS-III supports multitasking with any number of application tasks; the maximum number of tasks is limited only by the amount of computer memory (both code and data space) available to the processor. A task can be implemented as run-to-completion, in which the task deletes itself when it is finished, or more typically as an infinite loop, waiting for events to occur and processing those events.

Managing memory
Memory management is performed in the same way as in μC/OS-II.
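The round-robin rules above can be illustrated with a small simulation. This is a Python sketch, not kernel code; the task names, quantum, and tick counts are invented, and for simplicity every task is assumed to use its full time slice.

```python
from collections import deque

def round_robin(tasks, quantum, total_ticks):
    """Simulate round-robin scheduling among equal-priority tasks:
    each task runs for up to `quantum` ticks, then goes to the back
    of the ready queue. Returns the order in which tasks ran."""
    ready = deque(tasks)
    trace = []
    ticks = 0
    while ticks < total_ticks and ready:
        task = ready.popleft()   # next task in line gets the CPU
        trace.append(task)
        ticks += quantum
        ready.append(task)       # time slice ended; requeue the task
    return trace

print(round_robin(["A", "B", "C"], quantum=2, total_ticks=10))
# -> ['A', 'B', 'C', 'A', 'B']
```

In the real kernel a task that blocks or finishes early gives up the remainder of its quantum immediately, which the early-exit conditions in the text describe.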
Managing time
μC/OS-III offers the same time-management features as μC/OS-II. It also provides services so that tasks can suspend their execution for user-defined time delays. Delays are specified either as a number of clock ticks or in hours, minutes, seconds, and milliseconds.

Communicating between tasks
Sometimes a task or ISR must communicate information to another task, because it is unsafe for two tasks to access the same data or hardware resource at once. This is resolved via an information transfer, termed inter-task communication. Information can be communicated between tasks in two ways: through global data or by sending messages. When using global variables, each task or ISR must ensure that it has exclusive access to the variables. If an ISR is involved, the only way to ensure exclusive access to common variables is to disable interrupts. If two tasks share data, each can gain exclusive access to the variables by disabling interrupts, locking the scheduler, using a semaphore, or, preferably, using a mutual-exclusion semaphore. Messages can be sent either to an intermediate object called a message queue or directly to a task, since in μC/OS-III each task has its own built-in message queue. Use an external message queue if multiple tasks are to wait for messages; send a message directly to a task if only one task will process the data received. While a task waits for a message to arrive, it uses no CPU time.

Ports
A port involves three aspects: CPU-, OS-, and board-specific (BSP) code. μC/OS-II and μC/OS-III have ports for most popular processors and boards on the market and are suitable for use in safety-critical embedded systems such as aviation, medical systems, and nuclear installations. A μC/OS-III port involves writing or changing the contents of three kernel-specific files: OS_CPU.H, OS_CPU_A.ASM, and OS_CPU_C.C. It is also necessary to write or change the contents of three CPU-specific files: CPU.H, CPU_A.ASM, and CPU_C.C.
Finally, create or change a board support package (BSP) for the evaluation board or target board being used. A μC/OS-III port is similar to a μC/OS-II port. There are significantly more ports than listed here, and ports are subject to continuous development. Both μC/OS-II and μC/OS-III are supported by popular SSL/TLS libraries such as wolfSSL, which ensure security across all connections.

Licensing change
After its acquisition by Silicon Labs, Micrium changed to an open-source licensing model in February 2020. This includes μC/OS-III, all prior versions, and all components (USB, file system, GUI, TCP/IP, etc.).

Documentation and support
In addition to a typical support forum, a number of well-written books are available, as free PDFs or for low-cost purchase as hard-cover books. A number of the books are each tailored to a particular microcontroller architecture or development platform. Paid support is available from Micrium and others.
18516910
https://en.wikipedia.org/wiki/List%20of%20British%20innovations%20and%20discoveries
List of British innovations and discoveries
The following is a list and timeline of innovations, inventions, and discoveries that involved British people or the United Kingdom, including predecessor states in the history of the formation of the United Kingdom. This list covers innovation and invention in the mechanical, electronic, and industrial fields, as well as medicine, military devices and theory, artistic and scientific discovery and innovation, and ideas in religion and ethics.

The scientific revolution in 17th-century Europe stimulated innovation and discovery in Britain. Experimentation was considered central to innovation by groups such as the Royal Society, which was founded in 1660. The English patent system evolved from its medieval origins into a system that recognised intellectual property; this encouraged invention and spurred on the Industrial Revolution from the late 18th century. During the 19th century, innovation in Britain led to revolutionary changes in manufacturing, the development of factory systems, and the growth of transportation by railway and steamship that spread around the world. In the 20th century, Britain's rate of innovation, measured by patents registered, slowed in comparison to other leading economies.

17th century
1605 – Bacon's cipher, a method of steganography (hiding a secret message), is devised by Sir Francis Bacon.
1614 – John Napier publishes his work Mirifici Logarithmorum Canonis Descriptio, introducing the concept of logarithms, which simplify mathematical calculations.
1620 – The first navigable submarine is designed by William Bourne and built by Dutchman Cornelius Drebbel.
1625 – Early experiments in water desalination are conducted by Sir Francis Bacon.
1657 – The anchor escapement for clock making is invented by Robert Hooke.
1667 – A tin can telephone is devised by Robert Hooke.
1668 – Sir Isaac Newton invents the first working reflecting telescope.
1698 – The first commercial steam-powered device, a water pump, is developed by Thomas Savery.
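Bacon's cipher (1605 entry above) hides a message by encoding each letter as a five-symbol pattern, conventionally written with A's and B's, which can then be concealed in two subtly different typefaces. A sketch of the modern 26-letter variant (Bacon's original used a 24-letter alphabet with I/J and U/V combined):

```python
def bacon_encode(text):
    """Encode text with the 26-letter variant of Bacon's cipher:
    each letter becomes a 5-symbol string of A's and B's, the binary
    index of the letter in the alphabet (A = 0, B = 1 per bit)."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            n = ord(ch) - ord("A")
            out.append("".join("B" if n & (1 << i) else "A"
                               for i in range(4, -1, -1)))  # most significant bit first
    return " ".join(out)

print(bacon_encode("Bacon"))  # -> AAAAB AAAAA AAABA ABBBA ABBAB
```

The steganographic step, rendering each A and B as one of two near-identical typefaces within an innocent covering text, is a separate layer on top of this encoding.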
18th century
1701 – An improved seed drill is designed by Jethro Tull. It sows seeds evenly across a field using a rotating mechanism, making seed planting far easier.
1705 – Edmond Halley makes the first prediction of a comet's return.
1712 – The first practical steam engine is designed by Thomas Newcomen.
1718 – Edmond Halley discovers stellar motion.
1730 – The Rotherham plough, the first plough to be widely built in factories and commercially successful, is patented by Joseph Foljambe.
1737 – Andrew Rodger invents the winnowing machine.
1740s – The first electrostatic motors are developed by Andrew Gordon.
1744 – The earliest known reference to baseball is made in a publication, A Little Pretty Pocket-Book, by John Newbery. It contains a rhymed description of "base-ball" and a woodcut that shows a field set-up somewhat similar to the modern game, though in a triangular rather than diamond configuration, and with posts instead of ground-level bases.
1753 – The invention of hollow-pipe drainage is credited to Sir Hugh Dalrymple, who died in 1753.
1765 – James Hargreaves invents the spinning jenny, a multi-spindle spinning frame and one of the key developments in the industrialisation of textile manufacturing during the early Industrial Revolution. James Small advances the design of the plough, using mathematical methods to improve on the Scotch plough of James Anderson of Hermiston.
1767 – Adam Ferguson, often known as 'The Father of Modern Sociology', publishes his work An Essay on the History of Civil Society.
1776 – Scottish economist Adam Smith, often known as 'the father of modern economics', publishes his seminal text The Wealth of Nations. The Watt steam engine, conceived in 1765, goes into production; it is the first type of steam engine to make use of steam at a pressure just above atmospheric.
1779 – Samuel Crompton invents the spinning mule, which improved the industrialised production of thread for textile manufacture.
The spinning mule combined features of James Hargreaves' spinning jenny and Richard Arkwright's water frame.
1781 – The Iron Bridge, the first arch bridge made of cast iron, is built by Abraham Darby III.
1783 – A pioneer of selective breeding and artificial selection, Robert Bakewell, forms the Dishley Society to promote and advance the interests of livestock breeders.
1786 – The threshing machine is invented by Andrew Meikle.
1798 – Edward Jenner invents the first vaccine.

19th century
1802 – Sir Humphry Davy creates the first incandescent light by passing a current from a battery, at the time the world's most powerful, through a thin strip of platinum.
1804 – The world's first locomotive-hauled railway journey is made by Richard Trevithick's steam locomotive.
1807 – Alexander John Forsyth invents percussion ignition, the foundation of modern firearms.
1814 – Robert Salmon patents the first haymaking machine.
c. 1820 – John Loudon McAdam develops the macadam road construction technique.
1822 – Charles Babbage proposes the idea for a difference engine, an automatic mechanical calculator designed to tabulate polynomial functions, in a paper to the Royal Astronomical Society entitled "Note on the application of machinery to the computation of astronomical and mathematical tables".
1823 – An improved system of soil drainage is developed by James Smith.
1824 – Joseph Aspdin obtains a patent for Portland cement.
1825 – William Sturgeon invents the electromagnet.
1828 – A mechanical reaping machine is invented by Patrick Bell.
1831 – Electromagnetic induction, the operating principle of transformers and nearly all modern electric generators, is discovered by Michael Faraday.
1835 – Scotsman James Bowman Lindsay invents the incandescent light bulb.
1836 – The Marsh test for detecting arsenic poisoning is developed by James Marsh.
1837 – Charles Babbage describes an Analytical Engine, the first mechanical, general-purpose programmable computer.
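Babbage's Difference Engine (1822 entry above) tabulates a polynomial using additions alone, by the method of finite differences: once the starting value and its forward differences are set, every further value falls out of repeated addition. A sketch of the principle, with an example polynomial chosen purely for illustration:

```python
def tabulate(initial_differences, count):
    """Tabulate a polynomial by the method of finite differences:
    given f(0) and its forward differences, each new value is
    produced using additions alone -- the operating principle of
    Babbage's Difference Engine."""
    diffs = list(initial_differences)  # [f(0), delta f(0), delta^2 f(0), ...]
    values = []
    for _ in range(count):
        values.append(diffs[0])
        # Add each difference into the one above it.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# Example: f(x) = x^2 + x + 1, so f(0) = 1, first difference 2, second difference 2.
print(tabulate([1, 2, 2], 5))  # -> [1, 3, 7, 13, 21]
```

For a degree-n polynomial the n-th difference is constant, so a machine with n + 1 addition columns can tabulate it indefinitely, which is exactly what Babbage proposed to mechanise.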
The Cooke and Wheatstone telegraph, the first commercially successful electric telegraph, is designed by Sir Charles Wheatstone and Sir William Fothergill Cooke.
1839 – A pedal bicycle is invented by Kirkpatrick Macmillan.
1840 – Sir Rowland Hill reforms the postal system with the Uniform Penny Post and introduces the first postage stamp, the Penny Black, on 1 May.
1841 – Alexander Bain patents his design, produced the prior year, for an electric clock.
1842 – Superphosphate, the first chemical fertiliser, is patented by John Bennet Lawes.
1843 – SS Great Britain, the world's first steam-powered, screw-propeller-driven passenger liner with an iron hull, is launched. Designed by Isambard Kingdom Brunel, it was at the time the largest ship afloat. Alexander Bain patents a design for a facsimile machine.
1846 – A design for a chemical telegraph is patented by Alexander Bain. Bain's telegraph is installed on one line of the wires of the Electric Telegraph Company. Later, in 1850, it was used in America by Henry O'Reilly.
1847 – Boolean algebra, the basis for digital logic, is introduced by George Boole in his book The Mathematical Analysis of Logic.
1851 – Improvements to the facsimile machine are demonstrated by Frederick Bakewell at the 1851 World's Fair in London.
1852 – A steam-driven ploughing engine is invented by John Fowler.
1853 – English physician Alexander Wood develops a medical hypodermic syringe with a needle fine enough to pierce the skin.
1854 – The Playfair cipher, the first literal digraph substitution cipher, is invented by Charles Wheatstone and later promoted for use by Lord Playfair.
1868 – Mushet steel, the first commercial steel alloy, is invented by Robert Forester Mushet. Thomas Humber develops a bicycle design with the pedals driving the rear wheel. The first manually operated gas-lamp traffic lights are installed outside the Houses of Parliament on 10 December.
1869 – A bicycle design is developed by Thomas McCall.
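Boole's 1847 algebra (see the 1847 entry above) treats logic as equations over two values, which is why it underpins digital logic. As a small illustration, one of De Morgan's laws can be verified exhaustively over every truth assignment:

```python
from itertools import product

def check_de_morgan():
    """Exhaustively verify one of De Morgan's laws of Boolean algebra,
    not(a and b) == (not a) or (not b), over all four truth
    assignments -- the kind of identity Boole's algebra makes routine."""
    return all((not (a and b)) == ((not a) or (not b))
               for a, b in product([False, True], repeat=2))

print(check_de_morgan())  # -> True
```

With only two truth values, any proposed identity over a fixed number of variables can be settled by such a finite check, a property digital circuit design relies on constantly.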
1873 – Discovery of the photoconductivity of the element selenium by Willoughby Smith. This led to the invention of photoelectric cells (solar panels), including those used in the earliest television systems.
1876 – Scotsman Alexander Graham Bell patents the telephone in the U.S. The first safety bicycle is designed by the English engineer Harry John Lawson (also called Henry). Unlike the penny-farthing, the rider's feet were within reach of the ground, making it safer to stop.
1878 – Demonstration of an incandescent light bulb by Joseph Wilson Swan.
1883 – The Fresno scraper, which became a model for modern earth movers, is invented in California by Scottish emigrant James Porteous.
1884 – The light switch is invented by John Henry Holmes, a Quaker of Newcastle. The reaction steam turbine is invented by Anglo-Irish engineer Charles Algernon Parsons.
1885 – The first commercially successful safety bicycle, called the Rover, is designed by John Kemp Starley. The following year Dan Albone produces a derivative of this called the Ivel Safety cycle.
1886 – Walter Parry Haskett Smith, often called the Father of Rock Climbing in Britain, completes his first ascent of the Napes Needle, solo and without any protective equipment.
1892 – Sir Francis Galton devises a method for classifying fingerprints that proved useful in forensic science.
1897 – Sir Joseph John Thomson discovers the electron. The world's first wireless station is established on the Isle of Wight.

20th century
1901 – The first wireless signal across the Atlantic is sent from Cornwall in England and received in Newfoundland in Canada (a distance of 2,100 miles) by Italian scientist Guglielmo Marconi. The first commercially successful light farm tractor is patented by Dan Albone.
1902 – Edgar Purnell Hooley develops tarmac.
1906 – The introduction of HMS Dreadnought, a revolutionary capital ship design.
1907 – Henry Joseph Round discovers electroluminescence, the principle behind LEDs.
1910 – The first formal driving school, the British School of Motoring, is founded in London. Frank Barnwell establishes the fundamentals of aircraft design at the University of Glasgow, having made the first powered flight in Scotland the previous year.
1916 – The first use in battle of the military tank (although the tank was also developed independently elsewhere).
1918 – The Royal Air Force becomes the first independent air force in the world. The introduction of HMS Argus, the first example of the standard pattern of aircraft carrier, with a full-length flight deck that allowed wheeled aircraft to take off and land.
1922 – At the Sorbonne in France, Englishman Edwin Belin demonstrates a mechanical scanning device, an early precursor to modern television.
1926 – John Logie Baird makes the first public demonstration of a mechanical television on 26 January (the first successful transmissions were in early 1923 and February 1924). Later, in July 1928, he demonstrated the first colour television.
1930 – The jet engine is patented by Sir Frank Whittle.
1932 – The Anglepoise lamp is patented by George Carwardine, a design consultant specialising in vehicle suspension systems.
1933 – The cat's eye road marking is invented by Percy Shaw and patented the following year.
1936 – English economist John Maynard Keynes publishes his work The General Theory of Employment, Interest and Money, which challenged the established classical economics and led to the Keynesian Revolution in the way economists thought. The world's first public broadcasts of high-definition television are made from Alexandra Palace, North London, by the BBC Television Service; it is the first fully electronic television system to be used in regular broadcasting.
1937 – First available in the London area, the 999 telephone number is introduced as the world's first emergency telephone service.
1939 – The initial design of the Bombe, an electromechanical device to assist with the deciphering of messages encrypted by the Enigma machine, is produced by Alan Turing at the Government Code and Cypher School (GC&CS).
1943 – The Colossus computer begins working; it is the world's first electronic digital programmable computer.
1949 – The Manchester Mark 1 computer, significant for its pioneering inclusion of index registers, runs its first program error-free. Its chief designers are Freddie Williams and Tom Kilburn.
1951 – The concept of microprogramming is developed by Maurice Wilkes from the realisation that the central processing unit (CPU) of a computer could be controlled by a miniature, highly specialised computer program in high-speed ROM. LEO runs the first business application (a payroll system) on an electronic computer.
1952 – The introduction of the de Havilland Comet, the world's first commercial jet airliner. Autocode, regarded as the first compiled programming language, is developed for the Manchester Mark 1 by Alick Glennie.
1953 – Englishman Francis Crick and American James Watson of the Cavendish Laboratory at the University of Cambridge analyse X-ray crystallography data taken by Rosalind Franklin of King's College London to decipher the double-helical structure of DNA. They share the 1962 Nobel Prize in Medicine for their work.
1955 – The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, is built by Louis Essen at the National Physical Laboratory. This clock enabled further development of general relativity, and started a basis for an enhanced SI unit system.
1956 – The Metrovick 950, the first commercial transistor computer, is built by the Metropolitan-Vickers company.
1961 – The first electronic desktop calculators, the ANITA Mk7 and ANITA Mk8, are manufactured by the Bell Punch Company and marketed by its Sumlock Comptometer division.
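Several of the machines above trace back to Alan Turing's 1936 universal machine, in which the program is itself data read and interpreted by a fixed mechanism, the stored-program idea. A minimal interpreter sketch; the 'flip' program and tape below are invented purely for illustration:

```python
def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Minimal Turing-machine interpreter: `program` maps
    (state, symbol) -> (symbol_to_write, move, next_state).
    The program is ordinary data, which is the stored-program idea."""
    cells = dict(enumerate(tape))  # sparse tape, indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Invented example program: replace every 1 with 0, halting at the first blank.
flip = {
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
result = run_turing_machine(flip, "111")  # returns "000"
```

Because `run_turing_machine` treats any transition table as input, it plays the role of a universal machine with respect to the little programs it runs.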
1963 – High-strength carbon fibre is invented by engineers at the Royal Aircraft Establishment. The lava lamp is invented by British accountant Edward Craven Walker.
1964 – The first theory of the Higgs boson is put forward by Peter Higgs, a particle-physics theorist at the University of Edinburgh, and five other physicists. The particle is discovered in 2012 at CERN's Large Hadron Collider and its existence is confirmed in 2013.
1965 – A pioneer of the development of dairy farming systems, Rex Paterson, sets out his principles for labour management. The touchscreen is invented by E. A. Johnson, working at the Radar Research Establishment, Malvern, Worcestershire.
1966 – The cash machine and personal identification number system are patented by James Goodfellow.
1969 – The first carbon-fibre fabric in the world is woven in Stockport, England.
1970 – One of the first handheld televisions, the MTV-1, is developed by Sir Clive Sinclair.
1973 – Clifford Cocks develops the algorithm for the RSA cipher while working at the Government Communications Headquarters, approximately three years before it was independently developed by Rivest, Shamir and Adleman at MIT. The British government declassified the 1973 invention in 1997.
1976 – M. Stanley Whittingham develops the first lithium-ion battery while working as a researcher for Exxon.
1977 – Steptoe and Edwards successfully carry out a pioneering conception which results in the birth of the world's first baby conceived by IVF, Louise Brown, on 25 July 1978, in Oldham General Hospital, Greater Manchester, UK.
1979 – The tree shelter is invented by Graham Tuley to protect tree seedlings. One of the first laptop computers, the GRiD Compass, is designed by Bill Moggridge.
1984 – DNA profiling is discovered by Sir Alec Jeffreys at the University of Leicester. One of the world's first computer games to use 3D graphics, Elite, is developed by David Braben and Ian Bell.
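The public-key scheme Cocks arrived at in 1973 (see the 1973 entry above), publicly known as RSA, can be demonstrated with deliberately tiny numbers. The sketch below uses the standard textbook parameters and is in no way a secure key; real moduli are hundreds of digits long:

```python
def toy_rsa_demo():
    """Toy illustration of the RSA scheme with textbook-sized primes.
    Requires Python 3.8+ for pow(e, -1, phi) (modular inverse)."""
    p, q = 61, 53
    n = p * q                    # public modulus: 3233
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent, coprime with phi
    d = pow(e, -1, phi)          # private exponent: 2753
    message = 65
    cipher = pow(message, e, n)  # encrypt with the public key (e, n)
    plain = pow(cipher, d, n)    # decrypt with the private key (d, n)
    return cipher, plain

print(toy_rsa_demo())  # -> (2790, 65)
```

The security rests on the difficulty of recovering p and q from n; at this toy scale the factorisation is trivial, which is exactly why illustrative keys like this must never be used.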
1989 – Sir Tim Berners-Lee writes a proposal for what will become the World Wide Web. The following year, he specifies HTML, the hypertext language, and HTTP, the protocol. The touchpad pointing device is first developed for Psion computers.
1991 – A patent for an iris recognition algorithm is filed by John Daugman while working at the University of Cambridge; it became the basis of all publicly deployed iris recognition systems. The source code for the world's first web browser, called WorldWideWeb (later renamed Nexus to avoid confusion with the World Wide Web), is released into the public domain by Sir Tim Berners-Lee.
1992 – The first SMS message in the world is sent over the UK's GSM network.
1995 – The world's first national DNA database is developed.
1996 – A female domestic sheep becomes the first mammal cloned from an adult somatic cell, by scientists at the Roslin Institute in Edinburgh.
1997 – The ThrustSSC jet-propelled car, designed and built in England, sets the land speed record.

21st century
2003 – Beagle 2, a British landing spacecraft that forms part of the European Space Agency's 2003 Mars Express mission, lands on the surface of Mars but fails to communicate. It is located twelve years later in a series of images from NASA's Mars Reconnaissance Orbiter that suggest two of Beagle's four solar panels failed to deploy, blocking the spacecraft's communications antenna.
2004 – Graphene is isolated from graphite at the University of Manchester by Andre Geim and Konstantin Novoselov.
2005 – The design for a machine to lay rail track, the "Trac Rail Transposer", is patented and goes on to be used by Network Rail in the United Kingdom and the New York City Subway in the United States.
2012 – Raspberry Pi, a single-board computer, is launched and quickly becomes popular for education in programming and computer science.
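HTTP, the protocol Berners-Lee specified (1989 entry above), is a plain-text request/response exchange. The sketch below merely builds a minimal HTTP/1.1 GET request as a string; the host shown is the address of the first website and is used purely for illustration, and no network connection is made:

```python
def build_get_request(host, path="/"):
    """Build a minimal HTTP/1.1 GET request as plain text. The very
    first HTTP (0.9) requests were just 'GET /path'; the request line,
    Host header, and blank-line terminator came with later versions."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n")           # the blank line ends the header section

request = build_get_request("info.cern.ch", "/hypertext/WWW/TheProject.html")
print(request)
```

Sent over a TCP socket on port 80, a request of this shape is all a client needs to fetch a page, which is a large part of why the web spread so quickly.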
2014 – The European Space Agency's Philae lander leaves the Rosetta spacecraft and makes the first ever landing on a comet. The Philae lander was built with significant British expertise and technology, alongside that of several other countries.
2016 – SABRE (Synergetic Air Breathing Rocket Engine), a hybrid rocket/jet hypersonic air-breathing rocket engine.

Ceramics
Bone china – Josiah Spode
Ironstone china – Charles James Mason
Jasperware – Josiah Wedgwood

Clock making
Anchor escapement – Robert Hooke
Balance wheel – Robert Hooke
Coaxial escapement – George Daniels
Grasshopper escapement; H1, H2, H3 and H4 watches (a watch built to solve the longitude measurement problem) – John Harrison
Gridiron pendulum – John Harrison
Lever escapement, "the greatest single improvement ever applied to pocket watches" – Thomas Mudge
Longcase clock or grandfather clock – William Clement
Marine chronometer – John Harrison
Self-winding watch – John Harwood

Clothing manufacturing
Derby Rib (stocking manufacture) – Jedediah Strutt
Flying shuttle – John Kay
Mauveine, the first synthetic organic dye – William Henry Perkin
Power loom – Edmund Cartwright
Spinning frame – John Kay
Spinning jenny – James Hargreaves
Spinning mule – Samuel Crompton
Sewing machine – Thomas Saint in 1790
Water frame – Richard Arkwright
Stocking frame – William Lee
Warp-loom and bobbinet – John Heathcoat

Communications
Christmas card – Sir Henry Cole
Clockwork radio – Trevor Baylis
Electromagnetic induction and Faraday's law of induction, which began as a series of experiments by Faraday that later became some of the first experiments in the discovery of radio waves and the development of radio – Michael Faraday
Fibre optics pioneers in telecommunications – Charles K. Kao and George Hockham
Geostationary satellite concept, originated for use in telecommunications relays – Arthur C. Clarke
Kennelly–Heaviside layer first proposed, a layer of ionised gas that reflects radio waves around the Earth's curvature – Oliver Heaviside
Light signalling between ships – Admiral Philip H. Colomb (1831–1899)
Mechanical pencil – Sampson Mordan and John Isaac Hawkins in 1822
Pencil – Cumbria, England
Pitman shorthand – Isaac Pitman
Adhesive postage stamp and the postmark – James Chalmers (1782–1853)
Radar – Robert Watson-Watt (1892–1973)
Radio: the first transmission using a spark transmitter, achieving a range of approximately 500 metres – David E. Hughes
Underlying principles of radio – James Clerk Maxwell (1831–1879)
Radio communication development pioneer – William Eccles
Roller printing – Thomas Bell (patented 1783)
Long-lasting materials for today's liquid crystal displays – team headed by Sir Brynmor Jones, developed by Scotsman George Gray and Englishman Ken Harrison in conjunction with the Royal Radar Establishment and the University of Hull
Shorthand – Timothy Bright (1550/1–1615), who invented the first modern shorthand
'Binaural sound' developed for stereo – Alan Blumlein
Print stereotyping – William Ged (1690–1749)
Teletext information service – the British Broadcasting Corporation (BBC)
Totalisator – George Julius
Typewriter – first patent for a device similar to a typewriter granted to Henry Mill in 1714
Teleprinter – Frederick G. Creed (1871–1957)
Universal Standard Time – Sir Sandford Fleming (1827–1915)
Valentine's card – modern card, 18th-century England

Computing
ACE and Pilot ACE – Alan Turing
ARM architecture – the ARM CPU design is the microprocessor architecture of 98% of mobile phones and every smartphone
Atlas, an early supercomputer, the fastest computer in the world until the release of the American CDC 6600.
This machine introduced many modern architectural concepts: spooling, interrupts, instruction pipelining, interleaved memory, virtual memory and paging – team headed by Tom Kilburn
The first graphical computer game, OXO, on the EDSAC at Cambridge University – A. S. Douglas
First computer-generated music, played by the Ferranti Mark 1 computer – Christopher Strachey
Denotational semantics – Christopher Strachey, pioneer in programming language design
Deutsch–Jozsa algorithm, and the first universal quantum computer described – David Deutsch
Digital audio player – Kane Kramer
EDSAC, the first complete, fully functional computer to use the von Neumann architecture, the basis of every modern computer – Maurice Wilkes
EDSAC 2, the successor to the Electronic Delay Storage Automatic Calculator (EDSAC); it was the first computer to have a microprogrammed (microcode) control unit and a bit-slice hardware architecture – team headed by Maurice Wilkes
Ferranti Mark 1, also known as the Manchester Electronic Computer, the first computer to use the principles of early CPU (central processing unit) design, and the world's first successful commercially available general-purpose electronic computer – Freddie Williams and Tom Kilburn
Flip-flop circuit, which became the basis of electronic memory (random-access memory) in computers – William Eccles and F. W. Jordan
Conceptualised the integrated circuit – Geoffrey W. A. Dummer
Josephson effect; theorised the Pi Josephson junction and the Josephson junction – Brian David Josephson
Heavily involved in the development of the Linux kernel – Andrew Morton and Alan Cox
Manchester Baby, the world's first electronic stored-program computer – developed by Frederic Calland Williams and Tom Kilburn
Osborne 1, the first commercially successful portable computer, the precursor to the laptop computer – Adam Osborne
Packet switching, co-invented by British engineer Donald Davies and American Paul Baran – National Physical Laboratory, London, England
First PC-compatible palmtop computer (Atari Portfolio) – Ian H. S. Cullimore
First programmer – Ada Lovelace
First programming language, the Analytical Engine order code – Charles Babbage and Ada Lovelace
Psion Organiser, the world's first handheld computer – Psion PLC
First experimental quantum algorithm demonstrated on a working 2-qubit NMR quantum computer, used to solve Deutsch's problem – Jonathan A. Jones
The first rugged computer – Husky (computer)
Sumlock ANITA calculator, the world's first all-electronic desktop calculator – Bell Punch Co
Sinclair Executive, the first 'slimline' pocket calculator, amongst other electrical/electronic innovations – Sir Clive Sinclair
Co-inventor of the first trackball device – developed by Tom Cranston, Fred Longstaff and Kenyon Taylor
Universal Turing machine – the UTM model is considered to be the origin of the "stored program computer" used by John von Neumann in 1946 for his "Electronic Computing Instrument", which now bears von Neumann's name (the von Neumann architecture); the UTM is also considered the first operating system – Alan Turing
Williams tube, a cathode ray tube used to electronically store binary data (it can store roughly 500 to 1,000 bits of data) – Freddie Williams and Tom Kilburn
Wolfram's 2-state 3-symbol Turing machine – Stephen Wolfram

Engineering
Adjustable spanner – Edwin Beard Budding
Backhoe loader – Joseph Cyril Bamford
First coke-consuming blast furnace – Abraham Darby I
First working and volume-production brushless alternator – Newage Engineers
Carey Foster bridge – Carey Foster
Cavity magnetron, a critical component for microwave generation in microwave ovens and high-powered radios (radar) – John Randall and Harry Boot
(Radar) First compression ignition engine aka the Diesel Engine – Herbert Akroyd Stuart Hydraulic crane – William George Armstrong Crookes tube the first cathode ray tubes – William Crookes The first electrical measuring instrument, the electroscope – William Gilbert Fourdrinier machine – Henry Fourdrinier Francis turbine – James B. Francis Gas turbine – John Barber (engineer) Hot air engine (open system) – George Cayley Hot bulb engine or heavy oil engine – Herbert Akroyd Stuart Hydraulic accumulator The world's first house powered with hydroelectricity – Cragside, Northumberland Hydrogen Fuel Cell – William Robert Grove Internal combustion engine – Samuel Brown light-emitting diode (did not invent the first visible light, only theorised) – H. J. Round Linear motor is a multi-phase alternating current (AC) electric motor – Charles Wheatstone then improved by Eric Laithwaite First person to person to publicly predict and describe (although not the inventor) of the Microchip – Geoffrey W.A. 
Dummer Microturbines – Chris and Paul Bladon of Bladon Jets The world's first oil refinery and a process of extracting paraffin from coal laying the foundations for the modern oil industry – James Young (1811–1883) Pendulum governor – Frederick Lanchester Modified version of the Newcomen steam engine (Pickard engine) – James Pickard Contributed to the development of Radar – Scotsman Robert Watson-Watt and Englishman Arnold Frederic Wilkins Pioneer of radio guidance systems – Archibald Low Screw-cutting lathe – Henry Hindley The first industrially practical screw-cutting lathe – Henry Maudslay Devised a standard for screw threads leading to its widespread acceptance – Joseph Whitworth Rectilinear Slide rule – William Oughtred Compound steam turbine – Charles Algernon Parsons Stirling engine – Robert Stirling Supercharger – Dugald Clerk Electric transformer – Michael Faraday Two-stroke engine – Joseph Day The Wimshurst machine is an Electrostatic generator for producing high voltages – James Wimshurst Wind tunnel – Francis Herbert Wenham Vacuum diode also known as a vacuum tube – John Ambrose Fleming Household appliances Perambulator – William Kent designed a baby carriage in 1733 Collapsible baby buggy – Owen Maclaren Domestic dishwasher – key modifications by William Howard Livens "Bagless" vacuum cleaner – James Dyson "Puffing Billy" – First powered vacuum cleaner – Hubert Cecil Booth Fire extinguisher – George William Manby Folding carton – Charles Henry Foyle Lawn mower – Edwin Beard Budding Rubber band – Stephen Perry Daniell cell – John Frederic Daniell Tin can – Peter Durand Corkscrew – Reverend Samuell Henshall Mouse trap – James Henry Atkinson Modern flushing toilet – John Harington The pay toilet – John Nevil Maskelyne, Maskelyne invented a lock for London toilets, which required a penny to operate, hence the euphemism "spend a penny". Electric toaster – Rookes Evelyn Bell Crompton Teasmade – Albert E. 
Richardson Magnifying glass – Roger Bacon Thermosiphon, which forms the basis of most modern central heating systems – Thomas Fowler Automatic electric kettle – Russell Hobbs Thermos Flask – James Dewar Toothbrush – William Edward Addis Sunglasses – James Ayscough The Refrigerator – William Cullen (1748) The Flush toilet: Alexander Cummings (1775) The first distiller to triple distill Irish whiskey:John Jameson (Whisky distiller) The first automated can-filing machine John West (1809–1888) The waterproof Mackintosh – Charles Macintosh (1766–1843) The kaleidoscope: Sir David Brewster (1781–1868) Keiller's marmalade Janet Keiller (1797) – The first recipe of rind suspended marmalade or Dundee marmalade produced in Dundee. The modern lawnmower – Edwin Beard Budding (1830) The Lucifer friction match: Sir Isaac Holden (1807–1897) The self filling pen – Robert Thomson (1822–1873) Cotton-reel thread – J & J Clark of Paisley Lime Cordial – Peter Burnett in 1867 Bovril beef extract – John Lawson Johnston in 1874 Wellington Boots Can Opener – Robert Yeates 1855 Ideas, religion and ethics Agnosticism by Thomas Henry Huxley Anglicanism by Henry VIII of England Classical Liberalism – John Locke known as the "Father of Classical Liberalism". Malthusianism and the groundwork for the study of population dynamics – Thomas Robert Malthus with his work An Essay on the Principle of Population. 
Methodism – John Wesley and Charles Wesley
Quakerism – George Fox
Utilitarianism – Jeremy Bentham
Industrial processes
English crucible steel – Benjamin Huntsman
Bessemer process for steel production – Henry Bessemer
Hydraulic press – Joseph Bramah
Parkesine, the first man-made plastic – Alexander Parkes
Portland cement – Joseph Aspdin
Sheffield plate – Thomas Boulsover
Water frame – Richard Arkwright
Stainless steel – Harry Brearley
Rubber masticator – Thomas Hancock
Power loom – Edmund Cartwright
Parkes process – Alexander Parkes
Lead chamber process – John Roebuck
Development of the world's first commercially successful manufacture of high-quality flat glass using the float glass process – Alastair Pilkington
The first commercial electroplating process – George Elkington
The Wilson yarn clearer – Peter Wilson
Contact process
Froth flotation – William Haynes and A. H. Higgins
Extrusion – Joseph Bramah
Medicine
First correct description of the circulation of the blood – William Harvey
Smallpox vaccine – Edward Jenner, whose discovery is said to have "saved more lives (...) than were lost in all the wars of mankind since the beginning of recorded history"
Surgical forceps – Stephen Hales
Antisepsis in surgery – Joseph Lister
Artificial intraocular lens transplant surgery for cataract patients – Harold Ridley
Clinical thermometer – Thomas Clifford Allbutt
Isolation of fibrinogen ("coagulable lymph"), investigation of the structure of the lymphatic system, and description of red blood cells – the surgeon William Hewson
Credited with discovering how to culture embryonic stem cells in 1981 – Martin Evans
First blood pressure measurement and first cardiac catheterisation – Stephen Hales
Pioneer of anaesthesia and father of epidemiology, for locating the source of cholera – John Snow (physician)
Pioneered the use of sodium cromoglycate as a remedy for asthma – Roger Altounyan
The first scientist to demonstrate that a cancer may be caused by an environmental carcinogen, and one of the founders of orthopaedics – Percivall Pott
Performed the first successful blood transfusion – James Blundell
Discovered the active ingredient of aspirin – Edward Stone
Discovery of protein crystallography – Dorothy Crowfoot Hodgkin
The world's first successful stem cell transplant – John Raymond Hobbs
First typhoid vaccine – Almroth Wright
Pioneer of the treatment of epilepsy – Edward Henry Sieveking
Discovery of nitrous oxide (Entonox/"laughing gas") and its anaesthetic properties – Humphry Davy
Computed tomography (CT scanner) – Godfrey Newbold Hounsfield
Gray's Anatomy, widely regarded as the first complete human anatomy textbook – Henry Gray
Described Parkinson's disease – James Parkinson
General anaesthetic – pioneered by Scotsman James Young Simpson and Englishman John Snow
Contributed to the development of magnetic resonance imaging (MRI) – Sir Peter Mansfield
Statistical parametric mapping – Karl J. Friston
Nasal cannula – Wilfred Jones
The development of in vitro fertilisation – Patrick Christopher Steptoe and Robert Geoffrey Edwards
First baby genetically selected to be free of a breast cancer gene – University College London
Viagra – Peter Dunn, Albert Wood, Dr Nicholas Terrett
Acetylcholine – Henry Hallett Dale
EKG (underlying principles) – various
Discovery of vitamins – Frederick Gowland Hopkins
Earliest pharmacopoeia in English
The hip replacement operation, in which a stainless steel stem and 22 mm head fit into a polymer socket and both parts are fixed into position by PMMA cement – pioneered by John Charnley
In vitro fertilisation – developed by Sir Robert Geoffrey Edwards, with the first successful birth in 1978 as a result of natural-cycle IVF, where no stimulation was made
Description of hay fever – John Bostock (physician) in 1819
Pioneering the use of surgical anaesthesia with chloroform – Sir James Young Simpson (1811–1870)
Discovery of hypnotism (November 1841) – James Braid (1795–1860)
Identifying the mosquito as the carrier of malaria – Sir Ronald Ross (1857–1932)
Identifying the cause of brucellosis – Sir David Bruce (1855–1931)
Discovering the vaccine for typhoid fever – Sir William B. Leishman (1865–1926)
Discovering insulin – John J. R. Macleod (1876–1935), with others
Ambulight PDT, a light-emitting sticking plaster used in photodynamic therapy (PDT) for treating non-melanoma skin cancer – developed by Ambicare, Dundee's Ninewells Hospital and St Andrews University (2010)
Primary creator of the artificial kidney – Professor Kenneth Lowe, later Queen's physician in Scotland
Developing the first beta-blocker drugs – Sir James W. Black in 1964
Glasgow Coma Scale – Graham Teasdale and Bryan J. Jennett (1974)
EKG (electrocardiography) – Alexander Muirhead (1911)
Development of ibuprofen
The earliest discovery of an antibiotic, penicillin – Sir Alexander Fleming (1881–1955)
Discovering an effective tuberculosis treatment – Sir John Crofton in the 1950s
Discovering secretin, the first hormone, and its role as a chemical messenger – William Bayliss and Ernest Starling
Military
Aircraft carrier – angled flight deck, optical landing system and steam catapult for aircraft carriers – Dennis Cambell CB DSC, Nicholas Goodhart and Commander Colin C. Mitchell RNVR respectively
Armstrong gun – Sir William Armstrong
Bailey bridge – Donald Bailey
Battle tank – during WWI, developed separately in Britain and France, and first used in combat by the British; in Britain designed by Walter Gordon Wilson and William Tritton
Bouncing bomb – Barnes Wallis
Bullpup firearm configuration – Thorneycroft carbine
Chobham armour
Congreve rocket – William Congreve
Depth charge
Dreadnought battleship
The side-by-side boxlock action, also known as the double-barrelled shotgun – Anson and Deeley
Percussion ignition
Turret ship – although designs for a rotating gun turret date back to the late 18th century, this was the first warship to be outfitted with one
Fairbairn–Sykes fighting knife – William Ewart Fairbairn and Eric A. Sykes
Fighter aircraft – the Vickers F.B.5 Gunbus of 1914 was the first of its kind
Safety fuse – William Bickford
H2S radar (airborne radar to aid bomb targeting) – Alan Blumlein
Harrier Jump Jet – VTOL (vertical take-off and landing) aircraft
High explosive squash head – Sir Charles Dennistoun Burney
Livens Projector – William Howard Livens
The first self-powered machine gun, the Maxim gun – Sir Hiram Maxim; although the inventor was American, the Maxim gun was financed by Albert Vickers of the Vickers Limited company and produced in Hatton Garden, London
Mills bomb – the first modern fragmentation grenade
Nuclear fission chain reaction – conceived by Leo Szilard whilst crossing the road near Russell Square
Puckle gun – James Puckle
Rubber bullet and plastic bullet – developed by the Ministry of Defence during The Troubles in Northern Ireland
Self-propelled gun – the Gun Carrier Mark I was the first piece of self-propelled artillery ever to be produced
Shrapnel shell – Henry Shrapnel
Smokeless propellant to replace gunpowder, with the use of cordite – Frederick Abel
The world's first practical underwater active sound detection apparatus, the ASDIC active sonar – developed by Canadian physicist Robert William Boyle and English physicist Albert Beaumont Wood
Special forces – the SAS, founded by Sir David Stirling
Stun grenades – invented by the Special Air Service in the 1960s
Torpedo – Robert Whitehead
The Whitworth rifle, considered the first sniper rifle; during the American Civil War the Whitworth rifle was known to kill at long range – Sir Joseph Whitworth
Mining
Beam engine, used for pumping water from mines
Davy lamp – Humphry Davy
Geordie lamp – George Stephenson
Tunnel boring machine – James Henry Greathead and Isambard Kingdom Brunel
Musical instruments
Concertina – Charles Wheatstone
Theatre organ – Robert Hope-Jones
Logical bassoon, an electronically controlled version of the bassoon – Giles Brindley
Northumbrian smallpipes
Tuning fork – John Shore
The piano foot pedal – John Broadwood (1732–1812)
Photography
Ambrotype – Frederick Scott Archer
Calotype – William Fox Talbot
Cinematography – William Friese-Greene
Collodion process – Frederick Scott Archer
Collodion-albumen process – Joseph Sidebotham in 1861
Dry plate process, also known as the gelatine process, the first economically successful durable photographic medium – Richard Leach Maddox
First film, "The Horse in Motion", in 1878 – Eadweard Muybridge
Kinetoscope, the first motion picture camera – William Kennedy Laurie Dickson
Kinemacolor, the first successful colour motion picture process, used commercially from 1908 to 1914 – George Albert Smith
The first movie projector, the Zoopraxiscope – Eadweard Muybridge
Photographic negative – William Fox Talbot
Thomas Wedgwood, pioneer of photography, who devised the method to copy visible images chemically to permanent media
Single-lens reflex camera and the earliest panoramic camera with a wide-angle lens – Thomas Sutton
Stereoscope – Charles Wheatstone
Publishing firsts
Oldest publisher and printer in the world (having operated continuously since 1584) – Cambridge University Press
First book printed in English, "The Recuyell of the Historyes of Troye" – Englishman William Caxton in 1475
The first edition of the Encyclopædia Britannica (1768–81)
The first English textbook on surgery (1597)
The first modern pharmacopoeia – William Cullen (1776); the book became "Europe's principal text on the classification and treatment of disease"
The first postcards and picture postcards in the UK
Science
Triple achromatic lens – Peter Dollond
Joint first to discover alpha decay via quantum tunnelling – Ronald Wilfred Gurney
Alpha and beta rays discovered – Ernest Rutherford
Argon discovered – John Strutt, 3rd Baron Rayleigh, with Scotsman William Ramsay
Nuclear model of the atom discovered – Ernest Rutherford
Atomic theory – considered the father of modern chemistry, John Dalton conducted experiments with gases that led to the development of the modern atomic theory
Atwood machine, used for illustrating the law of uniformly accelerated motion – George Atwood
Marine barometer – Robert Hooke
Bell's theorem – John Stewart Bell
Calculus – Sir Isaac Newton
Cell biology – credit for the discovery of the first cells is given to Robert Hooke, who described the microscopic compartments of cork cells in 1665
Partition chromatography – Richard Laurence Millington Synge and Archer J. P. Martin
Coggeshall slide rule – Henry Coggeshall
Correct theory of combustion – Robert Hooke
Coumarin synthesised, one of the first synthetic perfumes, and cinnamic acid via the Perkin reaction – William Henry Perkin
Dew point hygrometer – John Frederic Daniell
Earnshaw's theorem – Samuel Earnshaw
Electrical generator (dynamo) – Michael Faraday
Electromagnet – William Sturgeon in 1823
Electron and isotopes discovered – J. J. Thomson
Equals sign – Robert Recorde
Erbium-doped fibre amplifier – Sir David N. Payne
Faraday cage – Michael Faraday
First law of thermodynamics – James Prescott Joule demonstrated that electric circuits obey the law of conservation of energy and that electricity is a form of energy; the unit of energy, the joule, is named after him
Hawking radiation – Stephen Hawking
Helium – Norman Lockyer
Holography – first developed by Dennis Gabor in Rugby, England; improved by Nicholas J. Phillips, who made it possible to record multi-colour reflection holograms
Hooke's law (equation describing elasticity) – Robert Hooke
Infrared radiation – discovery commonly attributed to William Herschel
Iris diaphragm – Robert Hooke
The law of gravity – Sir Isaac Newton
Magneto-optical effect – Michael Faraday
Mass spectrometer invented – J. J. Thomson
Maxwell's equations – James Clerk Maxwell
Micrometer – William Gascoigne
First bench micrometer, capable of measuring to one ten-thousandth of an inch – Henry Maudslay
Neutron discovered – James Chadwick
Newtonian telescope – Sir Isaac Newton
Newton's laws of motion – Sir Isaac Newton
First full-scale commercial nuclear reactor, at Calder Hall, opened in 1956
Nuclear transfer, a form of cloning first put into practice by Ian Wilmut and Keith Campbell to clone Dolly the sheep
Oxygen gas (O2) discovered – Joseph Priestley
Pell's equation – John Pell
Penrose graphical notation – Roger Penrose
Periodic table – John Alexander Reina Newlands
Pion (pi-meson) discovered – Cecil Frank Powell
Pre-empted elements of the theory of general relativity – William Kingdon Clifford
Proton discovered – Ernest Rutherford
Pioneering development of radar – Arnold Frederic Wilkins
Rayleigh scattering, a form of elastic scattering, discovered – John William Strutt, 3rd Baron Rayleigh
Seismograph – John Milne
Slide rule – William Oughtred
Standard deviation – Francis Galton
Symbols for "is less than" and "is greater than" – Thomas Harriot, 1630
Theory of evolution – Charles Darwin
Thomson scattering – J. J. Thomson
Weather map – Sir Francis Galton
Wheatstone bridge – Samuel Hunter Christie
The "×" symbol for multiplication, as well as the abbreviations "sin" and "cos" for the sine and cosine functions – William Oughtred
Astronomy
Discovery of the "White Spot" on Saturn – Will Hay
Discovery of Proxima Centauri, the closest known star to the Sun – Robert Innes (1861–1933)
Discovery of the planet Uranus and the moons Titania, Oberon, Enceladus and Mimas – Sir William Herschel (German-born astronomer, later in life British)
Discovery of Triton and the moons Hyperion, Ariel and Umbriel – William Lassell
Planetarium – John Theophilus Desaguliers
Predicted the existence and location of Neptune from irregularities in the orbit of Uranus – John Couch Adams
Important contributions to the development of radio astronomy – Bernard Lovell
Newtonian telescope – Sir Isaac Newton
Achromatic doublet lens – John Dollond
Coining the phrase "Big Bang" – Fred Hoyle
First theorised the existence of black holes and binary stars; invented the torsion balance – John Michell
Stephen Hawking – world-renowned theoretical physicist who made many important contributions to the fields of cosmology and quantum gravity, especially in the context of black holes
Spiral galaxies – William Parsons, 3rd Earl of Rosse
Discovery of Halley's Comet – Edmond Halley
Discovery of pulsars – Antony Hewish
Discovery of sunspots, and the first person to make a drawing of the Moon through a telescope – Thomas Harriot
The Eddington limit, the natural limit to the luminosity of stars, or the radiation generated by accretion onto a compact object – Arthur Stanley Eddington
Aperture synthesis, used for the accurate location and imaging of weak radio sources in the field of radio astronomy – Martin Ryle and Antony Hewish
Chemistry
Aluminium first discovered – Sir Humphry Davy
Concept of atomic number introduced to fix inadequacies of Mendeleev's periodic table, which had been based on atomic weight – Henry Moseley
Baconian method, an early forerunner of the scientific method – Sir Francis Bacon
Benzene first isolated, the first known aromatic hydrocarbon – Michael Faraday
Boron first isolated – Humphry Davy
Bragg's law, establishing the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances – William Henry Bragg and William Lawrence Bragg
Buckminsterfullerene discovered – Sir Harry Kroto
Callendar effect, the theory that linked rising carbon dioxide concentrations in the atmosphere to global temperature (global warming) – Guy Stewart Callendar
Chemical oceanography established – Robert Boyle
Dalton's law and the law of multiple proportions – John Dalton
The structure of DNA, pioneering the field of molecular biology – co-developed by Francis Crick and the American James Watson
DNA sequencing by chain termination – Frederick Sanger
Electrolysis and electrochemistry discovered – William Nicholson and Anthony Carlisle
Chemical fertiliser invented – John Lawes
Structure of ferrocene discovered – Geoffrey Wilkinson and others
Pioneer of the fuel cell – Francis Thomas Bacon
Henderson limit – Richard Henderson
Hydrogen discovered as a colourless, odourless gas that burns and can form an explosive mixture with air – Henry Cavendish
Introns discovered in eukaryotic DNA, along with the mechanism of gene splicing – Richard J. Roberts
Concept of isotopes first proposed, that elements with the same chemical properties may have differing atomic weights – Frederick Soddy
Josephson voltage standard – Brian Josephson
Kerosene invented – Abraham Gesner and James Young
Kinetic theory of gases developed – James Maxwell
Proposed the law of octaves, a precursor to the periodic law – John Newlands
Pioneer of meteorology, developing a nomenclature system for clouds in 1802 – Luke Howard
Potassium first isolated – Humphry Davy
Rayleigh scattering, which explains why the sky is blue, and prediction of the existence of surface waves – John Strutt, 3rd Baron Rayleigh
Silicones discovered – Frederic Kipping
Published Opus Majus, which among other things proposes an early form of the scientific method, and contains results of his experiments with gunpowder – Roger Bacon
Published several Aristotelian commentaries, an early framework for the scientific method – Robert Grosseteste
Sodium first isolated – Humphry Davy
Thallium discovered – William Crookes
Valence discovered – Edward Frankland
Chemical composition of water discovered – Henry Cavendish
Weston cell – Edward Weston (chemist)
The synthesising of xenon hexafluoroplatinate, showing for the first time that noble gases can form chemical compounds – Neil Bartlett
Sport
Football – the rules as we know them today were established in 1848 at Cambridge University; Sheffield F.C. is acknowledged by The Football Association and FIFA as the world's first and oldest football club
Rugby – William Webb Ellis
Cricket – the world's second-most popular sport can be traced back to the 13th century
Tennis – widely known to have originated in England
Boxing – England played a key role in the evolution of modern boxing; boxing was first accepted as an Olympic sport in Ancient Greece in 688 BC
Golf – modern game invented in Scotland
Billiards
Badminton
Darts – a traditional pub game; the numbering layout was devised by Brian Gamlin
Table tennis (ping pong) – invented on the dinner tables of Britain in the 1880s as an indoor version of tennis
Snooker – invented by the British Army in India
Bowls – has been traced to 13th-century England
Field hockey – the modern game grew from English public schools in the early 19th century
Netball – the sport emerged from early versions of women's basketball at Madame Österberg's College in England during the late 1890s
Rounders – the game originates in England, most likely from an older game known as stoolball
The Oxford and Cambridge Boat Race – the first race was in 1829, on the River Thames in London
Thoroughbred horseracing – first developed in 17th- and 18th-century England
Polo – its roots lie in Persia as a training game for cavalry units, but the formal codification of the rules of modern polo as a sport took place in 19th-century England
The format of the modern Olympics – William Penny Brookes
The first Paralympic Games competition, held in England in 1948 – Ludwig Guttmann
Hawk-Eye ball tracking system
Transport
Pedal-driven bicycle – Kirkpatrick Macmillan
Aviation
Aeronautics and flight – as a pioneer of glider development and the first well-documented human flight, George Cayley discovered and identified the four aerodynamic forces of flight (weight, lift, drag and thrust); modern aeroplane design is based on those discoveries, including cambered wings, and he is sometimes called the "Father of aviation"
Steam-powered flight with the Aerial Steam Carriage – John Stringfellow; the world's first powered flight took place at Chard in Somerset, 55 years before the Wright brothers' attempt at Kitty Hawk
VTOL (vertical take-off and landing) fighter-bomber aircraft – Hawker P.1127, designed by Sydney Camm
The first commercial jet airliner (de Havilland Comet)
The first supersonic airliner – Concorde, developed by the British Aircraft Corporation in partnership with Aérospatiale, 1969
The first aircraft capable of supercruise – English Electric Lightning
Ailerons – Matthew Piers Watt Boulton
Head-up display (HUD) – the Royal Aircraft Establishment (RAE) designed the first equipment, which was built by Cintel, with the system first integrated into the Blackburn Buccaneer
Pioneer of parachute design – Robert Cocking
The first human-powered aircraft to make an officially authenticated take-off and flight (SUMPAC) – the University of Southampton
Hale rockets, an improved version of the Congreve rocket design that introduced thrust vectoring – William Hale
SABRE engine, the first hypersonic jet/rocket engine capable of working in air and space, to allow the possibility of HOTOL
Air force – the Royal Air Force
Railways
Great Western Railway – Isambard Kingdom Brunel
Stockton and Darlington Railway, the world's first operational steam passenger railway
First inter-city steam-powered railway – Liverpool and Manchester Railway
Locomotives
Blücher – George Stephenson
Puffing Billy – William Hedley
Locomotion No 1 – Robert Stephenson
Sans Pareil – Timothy Hackworth
Stourbridge Lion – Foster, Rastrick and Company
Stephenson's Rocket – George and Robert Stephenson
Salamanca – Matthew Murray
Flying Scotsman – Sir Nigel Gresley
Other railway developments
Displacement lubricator, Ramsbottom safety valve, the water trough and the split piston ring – John Ramsbottom
Maglev (transport) rail system – Eric Laithwaite
World's first underground railway and the first rapid transit system; also the first underground railway to operate electric trains – London Underground
Advanced Passenger Train (APT), an experimental high-speed train that introduced tilting – British Rail
Roads
Bowden cable – Frank Bowden
Hansom cab – Joseph Hansom
Seat belt – George Cayley
Sinclair C5 – Sir Clive Sinclair
Tarmac – E. Purnell Hooley
Tension-spoke wire wheels – George Cayley
LGOC B-type – the first mass-produced bus
Pneumatic tyre – Robert William Thomson is deemed to be the inventor, despite John Boyd Dunlop being initially credited
Disc brakes – Frederick W. Lanchester
Belisha beacon – Leslie Hore-Belisha
Lotus 25 – considered the first modern F1 race car, designed for the 1962 Formula One season; a revolutionary design, the first fully stressed monocoque chassis to appear in Formula One – Colin Chapman, Team Lotus
Horstmann suspension, a tracked armoured fighting vehicle suspension – Sidney Horstmann
Steam fire engine – John Braithwaite
Penny-farthing – James Starley
Dynasphere – John Archibald Purves
Caterpillar track – Richard Lovell Edgeworth
Mini-roundabout – Frank Blackmore
Quadbike – the Standard Motor Company patented the 'Jungle Airborne Buggy' (JAB) in 1944
Sea
Plimsoll line – Samuel Plimsoll
Hovercraft – Christopher Cockerell
Lifeboat – Lionel Lukin
Resurgam – George Garrett
Transit (ship) – Richard Hall Gower
Turbinia, the first steam-turbine-powered steamship, designed by the engineer Sir Charles Algernon Parsons and built in Newcastle upon Tyne
Diving equipment/scuba gear – Henry Fleuss
Diving bell – Edmund Halley
Sextant – John Bird
Octant (instrument) – independently developed by Englishman John Hadley and the American Thomas Godfrey
Whirling speculum, a device that can be seen as a precursor to the gyroscope – John Serson
Screw propeller – Francis Pettit Smith
The world's first patent for an underwater echo-ranging device (sonar) – Lewis Fry Richardson
Hydrophone – before the invention of sonar, convoy escort ships used hydrophones to detect U-boats, greatly lessening the effectiveness of the submarine – research headed by Ernest Rutherford
Hydrofoil – John Isaac Thornycroft
Inflatable boat
The world's first iron-armoured and iron-hulled warship
Scientific innovations
The theory of electromagnetism – James Clerk Maxwell (1831–1879)
The Gregorian telescope – James Gregory (1638–1675)
The concept of latent heat – Joseph Black (1728–1799)
The pyroscope, atmometer and aethrioscope scientific instruments – Sir John Leslie (1766–1832)
Identifying the nucleus in living cells – Robert Brown (1773–1858)
Hypnotism – James Braid (1795–1860)
Transplant rejection – Professor Thomas Gibson (1940s), the first medical doctor to understand the relationship between donor graft tissue and host tissue rejection, and tissue transplantation, through his work on aviation burns victims during World War II
Colloid chemistry – Thomas Graham (1805–1869)
The kelvin, SI unit of temperature – William Thomson, Lord Kelvin (1824–1907)
Devising the diagrammatic system of representing chemical bonds – Alexander Crum Brown (1838–1922)
Criminal fingerprinting – Henry Faulds (1843–1930)
The noble gases – Sir William Ramsay (1852–1916)
The cloud chamber – Charles Thomson Rees Wilson (1869–1959)
Pioneering work on nutrition and poverty – John Boyd Orr (1880–1971)
The ultrasound scanner – Ian Donald (1910–1987)
Ferrocene synthetic substances – Peter Ludwig Pauson in 1955
The MRI body scanner – John Mallard and James Huchinson (1974–1980)
The first cloned mammal (Dolly the sheep) – conducted at The Roslin Institute research centre in 1996
Seismometer innovations – James David Forbes
Metaflex fabric innovations – University of St Andrews (2010); the first application of manufacturing fabrics that manipulate light, bending it around a subject; before this, such light-manipulating atoms were fixed on flat, hard surfaces, and the team at St Andrews was the first to develop the concept into fabric
Macaulayite – Dr Jeff Wilson of the Macaulay Institute, Aberdeen
Miscellaneous Oldest police force in continuous operation: Marine Police Force founded in 1798 and now part of the Metropolitan Police Service Oldest life insurance company in the world: Amicable Society for a Perpetual Assurance Office founded 1706 First Glee Club, founded in Harrow School in 1787. Oldest arts festival – Norwich 1772 Oldest music festival – The Three Choirs Festival Oldest literary festival – The Cheltenham Literature Festival Bayko – Charles Plimpton Linoleum – Frederick Walton Chocolate bar – J. S. Fry & Sons Meccano – Frank Hornby Crossword puzzle – Arthur Wynne Gas mask – (disputed) John Tyndall and others Graphic telescope – Cornelius Varley Steel-ribbed Umbrella – Samuel Fox Plastic – Alexander Parkes Plasticine – William Harbutt Carbonated soft drink – Joseph Priestley Friction Match – John Walker Invented the rubber balloon – Michael Faraday The proposal of a new decimal metrology which predated the Metric system – John Wilkins Edmondson railway ticket – Thomas Edmondson The world's first Nature Reserve – Charles Waterton *Public Park – Joseph Paxton Scouts – Robert Baden-Powell, 1st Baron Baden-Powell Spirograph – Denys Fisher The Young Men's Christian Association YMCA was founded in London – George Williams The Salvation Army, known for being one of the largest distributors of humanitarian aid – Methodist minister William Booth Prime meridian – George Biddell Airy Produced the first complete printed translation of the Bible into English – Myles Coverdale Founder of the Bank of Scotland – John Holland Venn diagram – John Venn Vulcanisation of rubber – Thomas Hancock Silicone – Frederick Kipping Pykrete – Geoffrey Pyke Vantablack – The world's blackest known substance Stamp collecting – John Edward Gray bought penny blacks on first day of issue in order to keep them lorgnette – George Adams Boys' Brigade Bank of England devised by William Paterson Bank of France devised by John Law Colour photography: the first known permanent colour 
photograph was taken by James Clerk Maxwell (1831–1879) Barnardos Boy Scouts Girl Guides RSPCA RSPB RNLI See also Economic history of the United Kingdom List of English inventions and discoveries List of English inventors and designers List of Welsh inventors Manufacturing in the United Kingdom Science and technology in the United Kingdom Science in Medieval Western Europe Scottish inventions and discoveries Timeline of Irish inventions and discoveries References Further reading Invention Inventions Lists of inventions or discoveries
43030873
https://en.wikipedia.org/wiki/Goma%20%28software%29
Goma (software)
Goma is an open-source, parallel, and scalable multiphysics software package for modeling and simulation of real-life physical processes, with a basis in computational fluid dynamics for problems with evolving geometry. It solves problems in all branches of mechanics, including fluids, solids, and thermal analysis. Goma uses advanced numerical methods, focusing on the low-speed flow regime with coupled phenomena for manufacturing and performance applications. It also provides a flexible software development environment for specialty physics. Goma was created by Sandia National Laboratories and is currently supported by both Sandia and the University of New Mexico. Capabilities Goma is a finite element program which solves problems from all branches of mechanics, including fluid mechanics, solid mechanics, chemical reactions and mass transport, and energy transport. The conservation principles for momentum, mass, species, and energy, together with material constitutive relations, can be described by partial differential equations. The equations are made discrete for solution on a digital computer with the finite element method in space and the finite difference method in time. The resulting nonlinear, time-dependent, algebraic equations are solved with a full Newton-Raphson method. The linearized equations are solved with direct or Krylov-based iterative solvers. The simulations can be run on a single processor or on multiple processors in parallel using domain decomposition, which can greatly speed up engineering analysis. Example applications include, but are not limited to, coating and polymer processing flows, super-alloy processing, welding/soldering, electrochemical processes, and solid-network or solution film drying. A full description of Goma's capabilities can be found in Goma's capabilities document. Goma is frequently used in conjunction with other software packages. 
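The solution procedure just described (discretize in space with finite elements and in time with finite differences, then drive the resulting nonlinear algebraic system to convergence with a full Newton-Raphson iteration, solving a linearized system at each step) can be sketched generically. This is an illustrative sketch only, not Goma's actual code; the toy residual, Jacobian, and starting guess below are invented for the example:

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Drive a nonlinear algebraic system R(x) = 0 to convergence
    with a full Newton-Raphson iteration."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        # Linearize and solve J(x) dx = -R(x); a production finite element
        # code would use a sparse direct or Krylov iterative solver here.
        dx = np.linalg.solve(jacobian(x), -r)
        x += dx
    raise RuntimeError("Newton-Raphson iteration did not converge")

# Invented toy system for illustration: x^2 + y^2 = 4 and x*y = 1.
R = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1], v[0]]])
sol = newton_raphson(R, J, np.array([2.0, 0.5]))
```

In a large finite element system such as those Goma assembles, the Jacobian is a sparse matrix, so the dense direct solve shown here is replaced by the sparse direct or Krylov-based iterative solvers mentioned above.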
Cubit is typically used to generate computational meshes, while ParaView is often used to visualize the simulation results. Simulation output is generated in the ExodusII file format. History Goma originated in 1994 from an early version of MP_SALSA, a finite element program designed to simulate chemically reacting flows in massively-parallel computing environments. As a point-of-departure, Goma was originally extended and adapted to free and moving boundary problems in fluid mechanics, heat transfer, and mass transfer. Five versions of Goma (1.0 through 5.0) were developed and released by Sandia from 1994 through 2012. These original versions of Goma were not approved for public release, and were released only internally within the US Government and its contracted industrial and academic partners. In 2013, Sandia released Goma 6.0 as open-source software under the GNU General Public License. It is hosted by GitHub and contains instructions on downloading additional software packages that are required to build Goma. Awards Goma 6.0 was awarded a 2014 R&D 100 Award by R&D Magazine. This award identifies the open-source release of Goma 6.0 as one of the top 100 technological innovations of 2013. Publications A user manual for Goma 6.0 has been published openly. Goma simulations have underpinned at least 14 Sandia technical reports and over 25 journal articles. External links Goma hosted on GitHub Goma Website R&D 100 award nomination video References Finite element software Finite element software for Linux Scientific simulation software Sandia National Laboratories
53557296
https://en.wikipedia.org/wiki/ISO/IEC%2090003
ISO/IEC 90003
ISO/IEC 90003 Software engineering -- Guidelines for the application of ISO 9001:2008 to computer software provides guidelines for organizations in the application of ISO 9001 to the acquisition, supply, development, operation and maintenance of computer software and related support services. The standard was developed by technical committee ISO/IEC JTC 1/SC 7 Software and systems engineering. Originally published as ISO 9000-3 in December 1997, it was first issued as ISO/IEC 90003 in February 2004. The review cycle of ISO/IEC 90003 is every five years. Main requirements of the standard ISO/IEC 90003:2014 adopts the ISO structure of 8 chapters in the following breakdown: Scope Normative references Terms and definitions Quality management system Management responsibility Resource management Product realization Measurement, analysis and improvement See also List of ISO standards List of IEC standards International Organization for Standardization References External links ISO/IEC 90003—Software engineering -- Guidelines for the application of ISO 9001 to computer software ISO/IEC JTC 1/SC 7—Software and systems engineering 90003 Software quality
803810
https://en.wikipedia.org/wiki/Software%20license
Software license
A software license is a legal instrument (usually by way of contract law, with or without printed material) governing the use or redistribution of software. Under United States copyright law, all software is copyright protected, in both source code and object code forms, unless that software was developed by the United States Government, in which case it cannot be copyrighted. Authors of copyrighted software can donate their software to the public domain, in which case it is also not covered by copyright and, as a result, cannot be licensed. A typical software license grants the licensee, typically an end-user, permission to use one or more copies of software in ways where such a use would otherwise potentially constitute copyright infringement of the software owner's exclusive rights under copyright. Software licenses and copyright law Most distributed software can be categorized according to its license type (see table). Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free and open-source software (FOSS). The distinct conceptual difference between the two is the granting of rights to modify and re-use a software product obtained by a customer: FOSS software licenses both rights to the customer and therefore bundles the modifiable source code with the software ("open-source"), while proprietary software typically does not license these rights and therefore keeps the source code hidden ("closed source"). In addition to granting rights and imposing restrictions on the use of copyrighted software, software licenses typically contain provisions which allocate liability and responsibility between the parties entering into the license agreement. In enterprise and commercial software transactions, these terms often include limitations of liability, warranties and warranty disclaimers, and indemnity if the software infringes intellectual property rights of anyone. 
Unlicensed software outside the scope of copyright protection is either public domain software (PD) or software which is non-distributed, non-licensed and handled as an internal business trade secret. Contrary to popular belief, distributed unlicensed software (not in the public domain) is fully copyright protected, and therefore legally unusable (as no usage rights at all are granted by a license) until it passes into the public domain after the copyright term has expired. Examples of this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without a specified license. As voluntarily handing software into the public domain (before reaching the copyright term) is problematic in some jurisdictions (for instance the law of Germany), there are also licenses granting PD-like rights, for instance the CC0 or WTFPL. Ownership vs. licensing Many proprietary or open-source software houses sell a copy of the software with a license to use it. Ownership of the good is not transferred to the user, who has no guarantee of lifelong availability of the software and is not entitled to sell, rent, give away, copy or redistribute it on the Web. License terms and conditions may specify further legal clauses that users cannot negotiate individually or by way of a consumer organization; users can only accept or refuse them, returning the product to the vendor. This right can be effectively exercised where the jurisdiction provides a mandatory cooling-off period after the purchase (as in European Union law), or a mandatory public advertisement of the license terms, so that users can read them before purchasing.
In the United States, Section 117 of the Copyright Act gives the owner of a particular copy of software the explicit right to use the software with a computer, even if use of the software with a computer requires the making of incidental copies or adaptations (acts which could otherwise potentially constitute copyright infringement). Therefore, the owner of a copy of computer software is legally entitled to use that copy of software. Hence, if the end-user of software is the owner of the respective copy, then the end-user may legally use the software without a license from the software publisher. As many proprietary "licenses" only enumerate the rights that the user already has under Section 117, and yet proclaim to take rights away from the user, these contracts may lack consideration. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of software with the software publisher. By doing so, Section 117 does not apply to the end-user and the software publisher may then compel the end-user to accept all of the terms of the license agreement, many of which may be more restrictive than copyright law alone. The form of the relationship determines if it is a lease or a purchase, for example UMG v. Augusto or Vernor v. Autodesk, Inc. The ownership of digital goods, like software applications and video games, is challenged by "licensed, not sold" EULAs of digital distributors like Steam. In the European Union, the European Court of Justice held that a copyright holder cannot oppose the resale of digitally sold software, in accordance with the rule of copyright exhaustion on first sale as ownership is transferred; this therefore calls into question the "licensed, not sold" EULA. The Swiss-based company UsedSoft innovated the resale of business software and fought for this right in court. In Europe, EU Directive 2009/24/EC expressly permits trading used computer programs.
Proprietary software licenses The hallmark of proprietary software licenses is that the software publisher grants the use of one or more copies of software under the end-user license agreement (EULA), but ownership of those copies remains with the software publisher (hence use of the term "proprietary"). This feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher. Therefore, it is typical of EULAs to include terms which define the uses of the software, such as the number of installations allowed or the terms of distribution. The most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license. In other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. As is usually the case with proprietary software licenses, this license contains an extensive list of activities which are restricted, such as: reverse engineering, simultaneous use of the software by multiple users, and publication of benchmarks or performance tests. There are numerous types of licensing models, varying from simple perpetual licenses and floating licenses to more advanced models such as the metered license. The most common licensing models are per single user (named user, client, node) or per user in the appropriate volume discount level, while some manufacturers accumulate existing licenses. These open volume license programs are typically called open license program (OLP), transactional license program (TLP), volume license program (VLP) etc. and are contrary to the contractual license program (CLP), where the customer commits to purchase a certain number of licenses over a fixed period (mostly two years). 
Licensing per concurrent/floating user also occurs, where all users in a network have access to the program, but only a specific number at the same time. Another license model is licensing per dongle, which allows the owner of the dongle to use the program on any computer. Licensing per server, CPU or points, regardless of the number of users, is common practice, as well as site or company licenses. Sometimes one can choose between a perpetual (permanent) and an annual license. For perpetual licenses, one year of maintenance is often required, but maintenance (subscription) renewals are discounted. For annual licenses, there is no renewal; a new license must be purchased after expiration. Licensing can be host/client (or guest), mailbox, IP address, domain etc., depending on how the program is used. Additional users are inter alia licensed per extension pack (e.g. up to 99 users), which includes the base pack (e.g. 5 users). Some programs are modular, so one has to buy a base product before being able to use other modules. Software licensing often also includes maintenance. This, usually with a term of one year, is either included or optional, but must often be bought with the software. The maintenance agreement (contract) typically contains a clause that allows the licensee to receive minor updates (V.1.1 => 1.2), and sometimes major updates (V.1.2 => 2.0). This option is usually called update insurance or upgrade assurance. For a major update, the customer has to buy an upgrade, if it is not included in the maintenance agreement. For a maintenance renewal, some manufacturers charge a reinstatement (reinstallment) fee retroactively per month, in the event that the current maintenance has expired. Maintenance sometimes includes technical support. When it does, the level of technical support, with tiers commonly named gold, silver and bronze, can vary depending on the communication method (i.e. e-mail versus telephone support), availability (e.g.
5x8, 5 days a week, 8 hours a day) and reaction time (e.g. three hours). Support is also licensed per incident as an incident pack (e.g. five support incidents per year). Many manufacturers offer special conditions for schools and government agencies (EDU/GOV license). Migration from another product (crossgrade), even from a different manufacturer (competitive upgrade) is offered. Free and open-source software licenses There are several organizations in the FOSS domain that give out guidelines and definitions regarding software licenses. The Free Software Foundation maintains non-exhaustive lists of software licenses following its Free Software Definition, as well as of licenses which the FSF considers non-free for various reasons. The FSF additionally distinguishes between free software licenses that are compatible or incompatible with the FSF license of choice, the copyleft GNU General Public License. The Open Source Initiative defines a list of certified open-source licenses following its Open Source Definition. The Debian project also has a list of licenses which follow their Debian Free Software Guidelines. Free and open-source licenses are commonly classified into two categories: those with minimal requirements on how the software can be redistributed (permissive licenses), and the protective share-alike licenses (copyleft licenses). An example of a copyleft free software license is the often used GNU General Public License (GPL), also the first copyleft license. This license is aimed at giving and protecting all users unlimited freedom to use, study, and privately modify the software, and if the user adheres to the terms and conditions of the GPL, freedom to redistribute the software or any modifications to it. For instance, any modifications made and redistributed by the end-user must include the source code for these, and the license of any derivative work must not put any additional restrictions beyond what the GPL allows.
Examples of permissive free software licenses are the BSD license and the MIT license, which give unlimited permission to use, study, and privately modify the software, and include only minimal requirements on redistribution. This gives a user the permission to take the code and use it as part of closed-source software or software released under a proprietary software license. It was debated for some time whether public domain software and public domain-like licenses can be considered a kind of FOSS license. Around 2004, lawyer Lawrence Rosen argued in the essay "Why the public domain isn't a license" that software could not truly be waived into the public domain and therefore cannot be interpreted as a very permissive FOSS license, a position which faced opposition from Daniel J. Bernstein and others. In 2012 the dispute was finally resolved when Rosen accepted the CC0 as an open source license, while admitting that, contrary to his previous claims, copyright can be waived away, backed by Ninth Circuit decisions. See also Comparison of free and open-source software licenses Digital rights management Copy protection Dual-licensing Index of Articles Relating to Terms of Service and Privacy Policies License-free software License manager Product activation Product key Rights Expression Language Software metering Terms of service Perpetual access Copyright licenses (category) Software by license (category) References External links Software licensing for a small ISV and the issue of open source by Dan Bricklin by Jon Gillespie-Brown at knol.google.com "Relationships between different types of licenses Free and Non-Free". Free Software Foundation "Various Licenses and Comments about Them". Free Software Foundation. Open Source and Freeware best practices The Challenges of Licensing The Knowledge Net of Software Licensing on omtco.eu Terms of service
2527411
https://en.wikipedia.org/wiki/Alfresco%20Software
Alfresco Software
Alfresco is a collection of information management software products for Microsoft Windows and Unix-like operating systems developed by Alfresco Software Inc. using Java technology. Their primary software offering, branded as the Digital Business Platform, is a proprietary, commercially licensed open-source platform that supports open standards and provides enterprise scale. Alfresco Software Inc. also provides open source Community Editions as free, LGPLv3 licensed open source software. These have some default restrictions in terms of scalability and availability, e.g. there is no built-in clustering support. Quality assurance by Alfresco is limited and bug fixes are only issued for the current versions. There is community support for the Community Edition, including an independent association, the Order of the Bee. History John Newton (co-founder of Documentum) and John Powell (a former COO of Business Objects) founded Alfresco Software, Inc. in 2005. In July 2005, Alfresco released the first version of their software. While Alfresco's product initially focused on document management, in May 2006 the company announced its intention to expand into web content management by acquiring senior technical and managerial staff from Interwoven; this included its VP of Web Content Management, two principal engineers, and a member of its user-interface team. In October 2009, the 2009 Open Source CMS Market Share Report described Alfresco as a leading Java-based open source web content management system. In 2010, Alfresco sponsored a new open-source BPM engine called Activiti. In July 2011, Alfresco and Ephesoft announced a technology partnership to offer document capture and Content Management Interoperability Services brought together for intelligent PDF capture and search and workflow development. In October 2011, Alfresco 4.0 was released with improvements to the user interface.
The new Alfresco moved additional features from Alfresco Explorer to Alfresco Share, as Alfresco Explorer is intended to be deprecated over time. In January 2013, Alfresco appointed Doug Dennerline, former President of SuccessFactors, former EVP of Sales at Salesforce.com, and former CEO of WebEx, as its new CEO. In September 2014, Alfresco 5 was released with new reporting and analytics features and an overhaul of its document search tool, moving from Lucene to Solr. In November 2016, Alfresco launched an AWS Quickstart for building an Alfresco Content Services server cluster on the AWS Cloud. In March 2017, Alfresco 5.2 was released and rebranded as the Digital Business Platform. This included the release of the Application Development Framework with reusable Angular JS (2.0) components. On February 8, 2018, it was announced that Alfresco was to be acquired by the private equity firm Thomas H. Lee Partners, L.P. On September 9, 2020, Alfresco was bought by Hyland Software for an undisclosed amount. Products Alfresco's core Digital Business Platform offering consists of three primary products. It is designed for clients who require modularity and scalable performance. It can be deployed on-premises on servers or in the cloud using an Amazon Web Services (AWS) Quick Start. A multi-tenant SaaS offering is also available. Alfresco Content Services (ACS) The enterprise content management (ECM) capabilities that have been a core part of Alfresco's business since its founding. It includes a central content and metadata repository, a web interface named Share, the ability to define automated business rules, and full-text indexing provided using Apache Solr. Alfresco Process Services (APS) The business process management (BPM) capabilities stemming from the open source Activiti project. It includes graphical design tools, business rules editors, and data integration to external business systems.
Alfresco Governance Services (AGS) Formerly known as Alfresco Records Management, AGS is an add-on software component that provides records management functionality to address information governance requirements. Alfresco Governance Services is DoD 5015.02 certified for records management. Alfresco Community Edition The open source community edition of Alfresco Content Services. Activiti Activiti is a separate open source product that is the community edition of Alfresco Process Services. Usage Enterprise content management for documents, web, records, images, videos, rich media, and collaborative content development. In 2019 it implemented a programme to enable George Eliot Hospital NHS Trust to become paperless. See also List of content management systems List of collaborative software List of applications with iCalendar support Cloud collaboration Document collaboration Document-centric collaboration References External links Official website Alfresco Hub - Forums (Community) Alfresco Content Services on AWS (Amazon Quickstart) Activiti Software Website Alfresco Software External Project Repositories on GitHub Free content management systems Document management systems Free software programmed in Java (programming language) Free business software Digital library software Software companies of the United States
1459562
https://en.wikipedia.org/wiki/Biomedical%20Informatics%20Research%20Network
Biomedical Informatics Research Network
The Biomedical Informatics Research Network, commonly referred to as "BIRN", is a proposed national project to assist biomedical researchers in their bioscience investigations through data sharing and online collaborations. BIRN provides data-sharing infrastructure, advisory services from a single source, and software tools and techniques. This national initiative is funded by NIH grants from the National Center for Research Resources and the National Institute of General Medical Sciences (NIGMS), a component of the United States National Institutes of Health (NIH). Overview To serve the biomedical community, BIRN is designed to share significant and intensive data between researchers across geographic distance using user-driven software. Participants can transfer data securely and privately, both internally and externally. All data transfer is designed to be consistent with Health Insurance Portability and Accountability Act of 1996 (HIPAA) privacy and security guidelines. BIRN also offers documented best practices, expert advice, data-sharing, and query and analysis software tools specific to biomedical research. Its researchers develop authorization capabilities and new data-sharing and engineering tools to assist researchers in making sense of new information. Structure BIRN is a collaborative effort between the NIGMS and a variety of nationwide leadership associations: the Information Sciences Institute (ISI) at the University of Southern California, the University of Chicago, Massachusetts General Hospital, the University of California at Irvine, and the University of California at Los Angeles. Its interdisciplinary team consists of computer scientists, engineers, physicians, biomedical researchers and other technical experts, including grid computing developers Carl Kesselman of USC ISI and Ian Foster of Argonne National Laboratory. Co-Principal Investigators are: Carl Kesselman, Ph.D., a professor in the University of Southern California (USC) Daniel J.
Epstein Department of Industrial and Systems Engineering, and a Fellow of the Information Sciences Institute (ISI), its highest honor; Ian Foster, Ph.D., director of the Computation Institute, a joint project between the University of Chicago and Argonne National Laboratory, and associate director of Argonne's Mathematics and Computer Science Division; Steven G. Potkin, M.D., a professor in the Department of Psychiatry and Human Behavior at the University of California at Irvine (UCI) and Director of UCI's Brain Imaging Center; Bruce R. Rosen, M.D., Ph.D., a professor of radiology at Harvard Medical School and Health Sciences and Technology at the Harvard-MIT Division of Health Sciences and Technology, and Director of the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital in Boston, MA; Jonathan C. Silverstein, M.D., associate director and senior fellow at the University of Chicago-Argonne National Laboratory Computation Institute, and an associate professor of Surgery, Radiology and Biological Sciences at UC; Arthur Toga, Ph.D., a professor at the University of Southern California. Resources Users range from small research groups to large national consortia, such as the Nonhuman Primate Research Consortium (NHPRC) and the Cardiovascular Research Grid (CVRG), both funded by NIH. By using BIRN's capabilities both to access data and perform research, groups can conduct large-scale data analysis while maximizing their existing technical infrastructure and expertise. Users also can participate in BIRN Working Groups that develop and support key functions, operations, security and data-sharing requirements. BIRN offers a website, wiki and mailing lists to help users stay current on news, best practices and topics related directly to their data-sharing considerations.
Its experts can help biomedical teams select software, data and metadata community standards, and set up security mechanisms and sharing protocols to create multi-institutional policies from a potentially overwhelming range of options. History BIRN was initially built around several "testbeds", or selected projects in neurology research, and began as an NCRR initiative. In 2008, its software expanded significantly to include data-sharing support across the entire biomedical research community. The network is now open to all biomedical research groups, in the belief that any group will benefit from its services, regardless of its specialty, mandate, size or U.S. location. BIRN's mission also has shifted from having a central place for data to a means of supporting efficient data transfer. As a result, BIRN no longer provides hardware, offers or maintains servers (previously called "racks") for storing user information, or uses participants' computers as a network interchange. The user-driven, software-based approach instead supports data sharing on participants' existing hardware and software. Each user group retains control over, and responsibility for, its own hardware, and for the security and privacy of its own information. Data is stored on users' systems rather than in a central repository, making possible storage of, and access to, vastly greater data quantities than was possible with BIRN "racks" alone. Membership To become members, groups begin by filling out a contact form on the BIRN website. A BIRN team member responds, and if its services appear to be a good match, s/he typically refers questioners to a BIRN member or WG for more in-depth conversations. BIRN seeks to aid university- and institution-based researchers with complex projects that are distributed technologically or geographically, such as multi-site clinical trials. Working Groups (WGs) evaluate candidate projects based on their unique characteristics and use cases.
There are no specific project criteria or required sizes, although WGs may consider factors such as research goals, potential impact, technical challenges, host institution and sponsor funding. WGs typically discuss whether BIRN's capabilities will address the group's data usage requirements, which BIRN tools and areas of expertise would fit best, and related issues. BIRN strongly encourages inquiries from biomedical research groups nationwide. Among the characteristics of groups likely to get the most out of BIRN: the need to exchange data between multiple sites on an ongoing basis, not just from one site to another or for a one-time-only project, and/or to make data from multiple sites publicly available. On a social level, BIRN looks for groups that understand users’ data-sharing problems and can articulate how those issues affect them in day-to-day, real-world ways. Groups aren't expected to be technical wizards, but do need to be able to articulate specific data-sharing needs and problems. BIRN contributes technical expertise, while users provide the knowledge specific to their fields. For instance, BIRN can advise on how to go about defining user needs and requirements, but only users can determine specifically what those factors should be. Because BIRN isn't a plug-and-play, off-the-shelf product, the network seeks prospective users who are committed to conceiving, designing, building and implementing the best solution for their circumstances. References External links BIRN community website BIRN community overview BIRN community frequently asked questions National Institutes of Health Neuroimaging Bioinformatics organizations
36085842
https://en.wikipedia.org/wiki/Free%20license
Free license
A free license or open license is a license agreement which contains provisions that allow other individuals to reuse another creator's work, giving them four major freedoms. Without a special license, these uses are normally prohibited by copyright law or commercial license. Most free licenses are worldwide, royalty-free, non-exclusive, and perpetual (see copyright durations). Free licenses are often the basis of crowdsourcing and crowdfunding projects. The invention of the term "free license" and the focus on the rights of users were connected to the sharing traditions of the hacker culture of the 1970s public domain software ecosystem, the social and political free software movement (since 1980) and the open source movement (since the 1990s). These rights were codified by different groups and organizations for different domains in Free Software Definition, Open Source Definition, Debian Free Software Guidelines, Definition of Free Cultural Works and The Open Definition. These definitions were then transformed into licenses, using the copyright as legal mechanism. Since then, ideas of free/open licenses spread into different spheres of society. Open source, free culture (unified as free and open-source movement), anticopyright, Wikimedia Foundation projects, public domain advocacy groups and pirate parties are connected with free and open licenses. 
Philosophy

Classification and licenses

By freedom:
- Agreements related to the public domain: Creative Commons CC0, WTFPL, Unlicense, Public Domain Dedication and License (PDDL)
- Permissive licenses: BSD License, MIT License, Mozilla Public License (file-based permissive copyleft), Creative Commons Attribution
- Copyleft licenses: GNU GPL, LGPL (weaker copyleft), AGPL (stronger copyleft), Creative Commons Attribution Share-Alike, Mozilla Public License, Common Development and Distribution License, GFDL (without invariant sections), Free Art License

By type of content:
- Free software licences (The Free Software Definition)
- Open Content (Open Content License, Open Publication License)
- Free content licenses (Definition of Free Cultural Works)
- Open-source hardware licenses
- Database licenses (Creative Commons v4 and Open Database Licence)
- Open patent licenses

By authors:
- Free Software Foundation
- Open Source Initiative
- Creative Commons
- Microsoft (Microsoft Public License, Microsoft Reciprocal License)
- Open Content Project
- Open Data Commons from the Open Knowledge Foundation: Public Domain Dedication and License (PDDL), Attribution License (ODC-By), Open Database License (ODC-ODbL)

Problems:
- License compatibility
- License proliferation
- Permissive free software

Creative Commons has affiliates in more than 100 jurisdictions all over the world.

United States

European Union
The EUPL was created in the European Union.

Germany
Harald Welte created gpl-violations.org.

References External links Various Licenses and Comments about Them - GNU Project - Free Software Foundation License information - Debian Open Source Licenses Licenses - Definition of Free Cultural Works proposed Open Source Hardware (OSHW) Statement of Principles and Definition v1.0 Free and open-source software licenses Contract law Databases Computer law Copyright licenses Terms of service Open-source hardware Patent law
65768652
https://en.wikipedia.org/wiki/History%20of%20Delphi%20%28software%29
History of Delphi (software)
This page details the history of the programming language and software product Delphi. Roots and birth Delphi evolved from Borland's Turbo Pascal for Windows, itself an evolution with Windows support from Borland's Turbo Pascal and Borland Pascal with Objects, very fast 16-bit native-code MS-DOS compilers with their own sophisticated integrated development environment (IDE) and textual user interface toolkit for DOS (Turbo Vision). Early Turbo Pascal (for MS-DOS) was written in a dialect of the Pascal programming language; in later versions support for objects was added, and it was named Object Pascal. Delphi was originally one of many codenames of a pre-release development tool project at Borland. Borland developer Danny Thorpe suggested the Delphi codename in reference to the Oracle at Delphi. One of the design goals of the product was to provide database connectivity to programmers as a key feature and a popular database package at the time was Oracle database; hence, "If you want to talk to [the] Oracle, go to Delphi". As development continued towards the first release, the Delphi codename gained popularity among the development team and beta testing group. However, the Borland marketing leadership preferred a functional product name over an iconic name and made preparations to release the product under the name Borland AppBuilder. Shortly before the release of the Borland product in 1995, Novell AppBuilder was released, leaving Borland in need of a new product name. After much debate and many market research surveys, the Delphi codename became the Delphi product name. Early Borland years (1995–2003) Borland Delphi Delphi (later known as Delphi 1) was released in 1995 for the 16-bit Windows 3.1, and was an early example of what became known as Rapid Application Development (RAD) tools. 
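The object support that set Object Pascal apart from classic Pascal can be illustrated with a minimal sketch. This is not code from any shipped Borland product; the class and all identifiers below are invented for illustration, using the Delphi-style class syntax:

```pascal
{ Minimal Object Pascal sketch: a class with a constructor,
  a virtual method and a property. All names are illustrative. }
type
  TGreeter = class
  private
    FName: string;
  public
    constructor Create(const AName: string);
    function Greet: string; virtual;
    property Name: string read FName write FName;
  end;

constructor TGreeter.Create(const AName: string);
begin
  inherited Create;
  FName := AName;
end;

function TGreeter.Greet: string;
begin
  Result := 'Hello, ' + FName;
end;
```

Properties such as Name above are the language feature behind the "Property Method Event" model that the Delphi 1 feature list refers to: components expose state through properties, behavior through methods, and callbacks through events.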
Delphi 1 features included:
- Visual two-way tools
- The Property Method Event (PME) model
- TObject, records, component and owner memory management
- The Visual Component Library (VCL)
- The Runtime Library (RTL)
- Structured exception handling
- Data-aware components live at design time
- Database support via the BDE and SQL Links

Borland Delphi 2
Delphi 2, released in 1996, supported 32-bit Windows environments and was bundled with Delphi 1 to retain 16-bit Windows 3.1 application development. New QuickReport components replaced Borland ReportSmith. Delphi 2 also introduced:
- Database grid
- OLE automation
- Visual form inheritance
- Long strings (beyond 255 characters)

Borland Delphi 3
Delphi 3, released in 1997, added:
- New VCL components encapsulating version 4.71 of the Windows Common Controls (such as Rebar and Toolbar)
- The TDataset architecture, separated from the BDE
- DLL debugging
- Code Insight technology
- Component packages and templates, and integration with COM through interfaces
- DecisionCube and TeeChart components for statistical graphing
- WebBroker
- ActiveForms
- The MIDAS three-tier architecture

Inprise Delphi 4
Inprise Delphi 4, released in 1998, completely overhauled the editor, and the IDE became dockable. It was the last version shipped with Delphi 1 for 16-bit programming. New features included:
- VCL support for ActionLists, anchors and constraints
- Method overloading
- Dynamic arrays
- High-performance database drivers
- Windows 98 and Microsoft BackOffice support
- Java interoperability
- CORBA development

Borland Delphi 5
Borland Delphi 5 was released in 1999 and improved upon Delphi 4 by adding:
- Frames
- Parallel development
- Translation capabilities
- An enhanced integrated debugger
- XML support
- ADO database support
- Reference-counted interfaces

Borland Delphi 6
Shipped in 2001, Delphi 6 supported both Linux (under the name Kylix) and Windows for the first time and offered a cross-platform alternative to the VCL known as CLX. 
Delphi 6 also added:
- The Structure window
- SOAP web services
- dbExpress
- BizSnap, WebSnap, and DataSnap

Borland Delphi 7
Delphi 7, released in August 2002, added support for:
- Web application development
- Windows XP themes

Used by more Delphi developers than any other single version, Delphi 7 is one of the most successful IDEs created by Borland. Its stability, speed, and low hardware requirements led to active use through 2020.

Later Borland years (2003–2008)

Borland Delphi 8
Delphi 8 (Borland Developer Studio 2.0), released December 2003, was a .NET-only release that compiled Delphi Object Pascal code into .NET CIL. The IDE changed to a docked interface (called Galileo) similar to Microsoft's Visual Studio .NET. Delphi 8 was highly criticized for its low quality and its inability to create native (Win32 API/x86) applications. The inability to generate native applications applied only to this release; the capability was restored in the next release.

Borland Delphi 2005
The next version, Delphi 2005 (Delphi 9, also Borland Developer Studio 3.0), included Win32 and .NET development in a single IDE, reiterating Borland's commitment to Win32 developers. Delphi 2005 included:
- The regained ability to compile native Windows applications (*.exe), removed in Delphi 8
- Design-time manipulation of live data from a database
- An improved IDE with multiple themes
- The for ... in statement (like C#'s foreach), added to the language
- Multi-unit namespaces
- Error Insight
- History tab
- Function inlining
- Refactoring
- Wildcards in uses statements
- Data Explorer
- Integrated unit testing

Delphi 2005 was widely criticized for its bugs; both Delphi 8 and Delphi 2005 had stability problems when shipped, which were only partially resolved in service packs. CLX support was dropped for new applications from this release onwards. 
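The for ... in statement added in Delphi 2005 can be sketched as a small console program; the program name and string value below are invented for illustration:

```pascal
program ForInDemo;
{$APPTYPE CONSOLE}

{ Demonstrates the for ... in iteration added in Delphi 2005,
  analogous to C#'s foreach; here it iterates the characters
  of a string without manual index bookkeeping. }
var
  S: string;
  Ch: Char;
begin
  S := 'Delphi';
  for Ch in S do
    Write(Ch);
  Writeln;
end.
```

The same construct also iterates arrays, sets, and any class exposing a suitable enumerator, which is what made it a general replacement for index-based for loops in later Delphi code.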
Borland Delphi 2006
In late 2005, Delphi 2006 (Delphi 10, also Borland Developer Studio 4.0) was released, combining development of C# and Delphi .NET, Delphi Win32 and C++ (a preview when shipped, stabilized in Update 1) into a single IDE. It was much more stable than Delphi 8 or Delphi 2005 when shipped, and improved further with the release of two updates and several hotfixes. Delphi 2006 included:
- Operator overloading
- Static methods and properties
- Designer guidelines, Form Positioner view
- Live code templates, block completion
- Line numbers, change bars, sync-edit
- Code folding and method navigation
- Debugging tool-tips
- Searchable Tool Palette
- The FastMM memory manager
- Support for MySQL
- Unicode support in dbExpress

Turbo Delphi and Turbo Delphi for .NET
On September 6, 2006, the Developer Tools Group (the working name of the not-yet-spun-off company) of Borland Software Corporation released single-language editions of Borland Developer Studio 2006, bringing back the Turbo name. The Turbo product set included Turbo Delphi for Win32, Turbo Delphi for .NET, Turbo C++, and Turbo C#. There were two variants of each edition: Explorer, a free downloadable flavor, and a Professional flavor, priced at US$899 for new users and US$399 for upgrades, which opened access to thousands of third-party components. Unlike earlier Personal editions of Delphi, Explorer editions could be used for commercial development.

Delphi transfer
On February 8, 2006, Borland announced that it was looking for a buyer for its IDE and database line of products, including Delphi, so that it could concentrate on its ALM line. Instead of selling the line, Borland transferred the development tools group to an independent, wholly owned subsidiary company named CodeGear on November 14, 2006.

CodeGear Delphi 2007
Delphi 2007 (Delphi 11), the first version by CodeGear, was released on March 16, 2007. 
The Win32 personality was released first, before the .NET personality of Delphi 2007, based on .NET Framework 2.0, was released as part of the CodeGear RAD Studio 2007 product. For the first time, Delphi could be downloaded from the Internet and activated with a license key. New features included:
- Support for MSBuild, build events, and build configurations
- Enhancements to the VCL for Windows Vista
- dbExpress 4 with connection pooling and delegate drivers
- CPU viewer windows
- FastCode enhancements
- IntraWeb/AJAX support
- Language support for French, German, and Japanese

Delphi 2007 also dropped a few features:
- C#Builder, due to low sales as a result of Visual Studio also offering C#
- The Windows Forms designer for Delphi .NET, because the part of the .NET framework API it was based on changed so drastically in .NET 2.0 that updating the IDE would have been a major undertaking

Internationalized versions of Delphi 2007 shipped simultaneously in English, French, German and Japanese. RAD Studio 2007 (code-named Highlander), which included .NET and C++Builder development, was released on September 5, 2007.

Delphi for PHP
The CodeGear era produced an IDE targeting PHP development, despite the word "Delphi" in the product name. Delphi for PHP was a VCL-like PHP framework that enabled the same rapid application development methodology for PHP as in ASP.NET Web Forms. Versions 1.0 and 2.0 were released in March 2007 and April 2008, respectively. The IDE later evolved into RadPHP after CodeGear's acquisition by Embarcadero.

Embarcadero years (2008–2015)
Borland sold CodeGear to Embarcadero Technologies in 2008. Embarcadero retained the CodeGear name created by Borland to identify its tool and database offerings, but identified its own database tools under the DatabaseGear name. 
CodeGear Delphi 2009
Delphi 2009 (Delphi 12, code-named Tiburón) added many new features:
- Full Unicode support in VCL and RTL components
- Generics
- Anonymous methods for Win32 native development
- Ribbon controls
- DataSnap library updates
- Build configurations
- Class Explorer
- PNG support

Delphi 2009 dropped support for .NET development, replaced by Delphi Prism, developed by RemObjects Software.

CodeGear Delphi 2010
Delphi 2010 (code-named Weaver, aka Delphi 14; there was no version 13) was released on August 25, 2009, and is the second Unicode release of Delphi. It included:
- A new compiler run-time type information (RTTI) system
- Support for Windows 7
- Direct2D canvas
- Touch screen and gestures
- Source code formatter
- Debugger visualizers
- Thread-specific breakpoints
- Background compilation
- Source code audits and metrics
- The option to also have the old-style component palette in the IDE

Embarcadero Delphi XE
Delphi XE (aka Delphi 2011, code-named Fulcrum) was released on August 30, 2010, and improved upon the development environment and language with:
- Regular expression library
- Subversion integration
- dbExpress filters, authentication, proxy generation, JavaScript framework, and REST support
- Indy WebBroker
- Support for Amazon EC2 and Microsoft Azure
- Build groups
- Named threads in the debugger
- Command-line audits, metrics, and document generation

Delphi Starter Edition
On January 27, 2011, Embarcadero announced the availability of a new Starter Edition that gives independent developers, students and micro businesses a slightly reduced feature set for a price less than a quarter of that of the next-cheapest version. This Starter Edition is based upon Delphi XE with Update 1.

Embarcadero Delphi XE2
On September 1, 2011, Embarcadero released RAD Studio XE2 (code-named Pulsar), which included Delphi XE2, C++Builder, Embarcadero Prism XE2 (version 5.0, later upgraded to XE2.5 version 5.1, rebranded from Delphi Prism) and RadPHP XE2 (version 4.0). 
Delphi XE2 included:
- Native support for 64-bit Windows (except the Starter edition) in addition to the long-supported 32-bit versions, with some backwards compatibility. Applications for 64-bit platforms could be compiled, but not tested or run, on the 32-bit platform. The XE2 IDE cannot debug 64-bit programs on Windows 8 and above.
- A new library called FireMonkey that supports Windows, Mac OS X and the Apple iPhone, iPod Touch and iPad portable devices. FireMonkey and VCL are not compatible; one or the other must be used, and older VCL applications cannot use FireMonkey unless user interfaces are recreated with FireMonkey forms and controls. Third parties have published information on how to use FireMonkey forms in VCL software, to facilitate gradual migration, but even then VCL and FireMonkey controls cannot be used on the same form.
- Live Bindings for VCL and FireMonkey
- VCL Styles
- Unit scope names
- Platform Assistant
- DataSnap connectors for mobile devices, cloud API, HTTPS support, and TCP monitoring
- dbExpress support for ODBC drivers
- Deployment manager

Embarcadero said that Linux operating system support "is being considered for the roadmap", as is Android, and that they are "committed to ... FireMonkey. ... expect regular and frequent updates to FireMonkey". Pre-2013 versions only supported iOS platform development with Xcode 4.2.1 and lower, OS X version 10.7 and lower, and iOS SDK 4.3 and earlier.

Embarcadero Delphi XE3
On September 4, 2012, Embarcadero released RAD Studio XE3, which included Delphi XE3, C++Builder, Embarcadero Prism XE3 (version 5.2) and HTML5 Builder XE3 (version 5.0), which was upgraded and rebranded from RadPHP. Delphi XE3 added:
- Native support for both 32-bit and 64-bit editions of Windows (including Windows 8), and Mac OS X, with the FireMonkey 2/FM² framework 
- FMX (FireMonkey) actions, touch/gestures, layouts, and anchors
- FMX support for bitmap styles
- FMX audio/video
- VCL/FMX support for sensor devices
- FMX location sensor component
- Virtual keyboard support
- DirectX 10 support

Embarcadero Delphi XE4
On April 22, 2013, Embarcadero released RAD Studio XE4, which included Delphi XE4 and C++Builder but dropped Embarcadero Prism and HTML5 Builder. XE4 included the following changes:
- Two new compilers for Delphi mobile applications: the Delphi Cross Compiler for the iOS simulator and the Delphi Cross Compiler for iOS devices. These compilers differ significantly from the Win64 desktop compiler: they do not support COM, inline assembly of CPU instructions, or six older string types such as PChar. The new mobile compilers advance the notion of eliminating pointers and require an explicit style of marshalling data to and from external APIs and libraries. The Delphi XE4 run-time library (RTL) is optimized for 0-based, read-only (immutable) Unicode strings, which cannot be indexed for the purpose of changing their individual characters. The RTL also adds status-bit-based exception routines for ARM CPUs that do not generate exception interrupts.
- iOS styles, Retina styles, virtual keyboards, App Store deployment manager
- Mobile form designer
- Web browser component, motion and orientation sensor components
- ListView component
- Platform services and notifications
- FireDAC universal data access components
- InterBase IBLite and IBToGo

Embarcadero Delphi XE5
On September 12, 2013, Embarcadero released RAD Studio XE5, which included Delphi XE5 and C++Builder. 
It added:
- Android support (specifically, ARM v7 devices running Gingerbread (2.3.3–2.3.7), Ice Cream Sandwich (4.0.3–4.0.4) and Jelly Bean (4.1.x, 4.2.x, 4.3.x))
- Deployment manager for Android
- iOS 7 style support
- REST services client access and authentication components

Embarcadero Delphi XE6
On April 15, 2014, Embarcadero released RAD Studio XE6, which included Delphi XE6 and C++Builder. It allows developers to create natively compiled apps for desktop, mobile, and wearable devices such as Google Glass, with a single C++ or Object Pascal (Delphi) codebase. RAD Studio XE6 added:
- Windows 7 and 8.1 styles
- Access to cloud-based RESTful web services
- FireDAC compatibility with more databases
- Fully integrated InterBase support

Embarcadero Delphi XE7
On September 2, 2014, Embarcadero released RAD Studio XE7, which included Delphi XE7 and C++Builder. Its biggest development enabled Delphi/Object Pascal and C++ developers to extend existing Windows applications and build apps that connect desktop and mobile devices with gadgets, cloud services, and enterprise data and APIs, by compiling FMX projects for both desktop and mobile devices. XE7 also included:
- The IBLite embeddable database for Windows, Mac, Android, and iOS
- Multi-display support
- Multi-touch support and gesture changes
- Full-screen immersive mode for Android
- Pull-to-refresh feature for TListView on iOS and Android
- FMX save state feature

Embarcadero Delphi XE8
On April 7, 2015, Embarcadero released RAD Studio XE8, which included Delphi XE8 and C++Builder. XE8 added the following tools:
- GetIt Package Manager
- Embarcadero Community toolbar
- Native presentation of TListView, TSwitch, TMemo, TCalendar, TMultiView, and TEdit on iOS
- Interactive maps
- New options for the Media Library
- InputQuery support for masking input fields
- FireDAC improvements

Embarcadero Delphi 10 Seattle
On August 31, 2015, Embarcadero released RAD Studio 10 Seattle, which included Delphi and C++Builder. 
Seattle included:
- Android background services support
- TBeaconDevice class for turning a supported platform device into a "beacon"
- FireDAC support for the NoSQL MongoDB database
- FireMonkey controls zOrder support for Windows
- Support for calling WinRT APIs
- StyleViewer for the Windows 10 style in the Bitmap Style Designer
- High-DPI awareness and 4K monitor support

Update 1 (Delphi 10.0.1) was released November 2015 and added:
- FMX Grid control for iOS
- iOS native UI styling
- New FMX feature demos
- Platform support for iOS 10 and macOS Sierra

Idera years (2015–present), under the Embarcadero brand
In October 2015, Embarcadero was purchased by Idera Software. Idera continues to run the developer tools division under the Embarcadero brand.

Embarcadero Delphi 10.1 Berlin
On April 20, 2016, Embarcadero released RAD Studio 10.1 Berlin, which included Delphi and C++Builder, both generating native code for the 32- and 64-bit Windows platforms, OS X, iOS and Android (ARM, MIPS and x86 processors). Delphi 10.1 Berlin introduced:
- Windows Desktop Bridge support
- Android 6.0 support
- EMS Apache Server support
- Hint property changes
- Address book for iOS and Android
- CalendarView control

Delphi 10.1.1 Update 1
Released September 2016, Update 1 added:
- TGrid support for iOS
- ControlType toggle for Platform or Render
- FMX ListView Items Designer
- FMX Search Filter
- Deployment of iOS apps to macOS Sierra
- 50+ Internet of Things packages

Delphi 10.1.2 Update 2
Released December 2016, Update 2 included:
- Windows 10 App Store deployment
- Quick Edit feature for the VCL Form Designer
- VCL calendar controls that mimic Windows RT and provide backwards compatibility
- Windows 10 styles for VCL and FMX

Embarcadero Delphi 10.2 Tokyo
On March 22, 2017, Embarcadero released RAD Studio 10.2 Tokyo, adding:
- 64-bit Linux support, limited to console and non-visual applications 
- FireDAC Linux support for Linux-capable DBMSs
- MariaDB, MySQL, and SQL Server support
- InterBase 2017 included in the main installation
- Firebird support for Direct I/O
- New VCL controls for Windows 10

Delphi 10.2.1 Update 1
Released August 2017, Update 1 included:
- Improved QPS (quality, performance, stability)
- Over 140 fixes to customer-reported Quality Portal issues
- BPL package loading for the Windows Creators Update
- Improved support for the latest versions of iOS and Xcode
- TEdit improvements on the latest Android, faster controls rendering
- Parse API for other providers
- FireDAC improvements for SQL Server, InterBase 2017, ODBC

Delphi 10.2.2 Update 2
Released December 2017, Update 2 included:
- New VCL controls and layouts (panels)
- Dataset to JSON
- Mobile platforms QPS
- RAD Server licensing
- User experience improvements (manage platforms, progress bar on loading, etc.)
- FMX QuickEdits
- Dark IDE theme

Delphi 10.2.3 Update 3
Released March 2018, Update 3 included:
- Expanded RAD Server/Ext JS support
- InterBase 2017 included in the main installation
- Mobile support included in the basic package
- FMX UI templates

Embarcadero Delphi 10.2 Tokyo (Community Edition)
On July 18, 2018, Embarcadero released the Community Edition as a free download. Users of the Community Edition may not earn more than US$5,000, and library source code and VCL/FMX components are more limited compared to the Professional edition.

Embarcadero Delphi 10.3 Rio
On November 21, 2018, Embarcadero released RAD Studio 10.3 Rio. 
This release included many improvements:
- New Delphi language features: inline block-local variable declarations and type inference
- FireMonkey Android zOrder, native controls, and API Level 26
- Windows 10 VCL and high-DPI improvements
- RAD Server architecture extension and Docker support
- Android push notification

Delphi 10.3.1 Update 1
Released February 2019, Update 1 included:
- Expanded support for iOS 12 and iPhone X-series devices
- RAD Server Console UI redesign and migration to the Ext JS framework
- Improved FireDAC support for Firebird 3.0.4 and Firebird Embedded
- New VCL and FMX multi-device styles
- IDE productivity components
- Quality improvements to over 150 customer-reported issues

Delphi 10.3.2 Update 2
Released July 2019, Update 2 included:
- Delphi macOS 64-bit support
- RAD Server wizards and deployment improvements
- Android push notification support with Firebase
- Delphi Linux FireMonkey GUI application support
- Delphi Android 64-bit support
- macOS Catalina (Delphi) and iOS 13 support
- RAD Server Docker support

Delphi 10.3.3 Update 3
Released November 2019, Update 3 included:
- Delphi Android 64-bit support
- Delphi iOS 13 and macOS Catalina support
- RAD Server Docker deployment
- Improved App Tethering stability
- Improved iOS push notification support
- Debugger improvements

Embarcadero Delphi 10.4 Sydney
On May 26, 2020, Embarcadero released RAD Studio 10.4 Sydney with new features such as:
- Major Delphi Code Insight improvements
- Unified memory management across all supported platforms
- Enhanced Delphi multi-device platform support
- Unified installer for online and offline installations
- Windows Server 2019 support
- Parallel programming component updates
- Metal API support on macOS and iOS 
Delphi 10.4.1 Update 1
Released September 2020, Update 1 included:
- 850+ enhancements and fixes
- Windows Server 2019 support
- Multi-monitor and 4K scaling improvements
- Parallel programming component updates

Delphi 10.4.2 Update 2
Released February 24, 2021, Update 2 included:
- New VCL controls: TControlList and TNumberBox
- MSIX app packaging format support
- Installer support for silent, automated installations
- Enhanced Migration Tool
- Major compiler/IDE speed increases (over 30 IDE Fix Pack integrations)
- Android 11, macOS 11, and iOS 14 support

Embarcadero Delphi 11 Alexandria
On September 9, 2021, Embarcadero released RAD Studio 11 Alexandria with new features including:
- High-DPI-enabled IDE
- VCL styles in the form designer
- FireMonkey design guidelines
- macOS ARM 64-bit target platform
- Android API 30 support

References External links Delphi Fandom Page Delphi Version Release Dates Pascal (programming language) Pascal (programming language) software Software version histories History of software
1040299
https://en.wikipedia.org/wiki/Clive%20Finkelstein
Clive Finkelstein
Clive Finkelstein (born c. 1939, died 12 September 2021) was an Australian computer scientist, known as the "father" of information technology engineering.

Life and work
In 1961 Finkelstein received his Bachelor of Science from the University of New South Wales in Sydney. After graduation he worked in the field of database processing for IBM in Australia and in the USA. In 1972 Finkelstein was elected a Fellow of the Australian Computer Society. Back in Australia, in 1976 he founded the IT consultancy firm Infocom Australia. Finkelstein was a distinguished member of the International Advisory Board of DAMA International (Data Administration Management Association), with John Zachman. In 2008 he was awarded a position in the Pearcey Hall of Fame of the ACS in Australia. From 1976 to 1980 Finkelstein developed the concept of information technology engineering, based on original work he carried out to bridge from strategic business planning to information systems. He wrote the first publication on information technology engineering: a series of six in-depth articles of the same name published by US Computerworld in May–June 1981. He also co-authored with James Martin the influential Savant Institute report "Information Engineering", published in November 1981, and wrote a monthly column, "The Enterprise", for DM Review magazine. Finkelstein died of Parkinson's disease in September 2021.

Selected publications
Martin, James, and Clive Finkelstein. Information Engineering. Savant, November 1981.
Finkelstein, Clive. An Introduction to Information Engineering: From Strategic Planning to Information Systems. Addison-Wesley Longman Publishing Co., Inc., 1989.
Finkelstein, Clive. Information Engineering: Strategic Systems Development. Addison-Wesley Longman Publishing Co., Inc., 1992.
Finkelstein, Clive, and Peter Aiken. Building Corporate Portals with XML. McGraw-Hill, Inc., 2000.
Clive Finkelstein. 
Enterprise Architecture for Integration: Rapid Delivery Methods and Technologies, First Edition, Artech House, 2006. Hardcover. Clive Finkelstein. Enterprise Architecture for Integration: Rapid Delivery Methods and Technologies, Second Edition, IES, 2011. ebook. Clive Finkelstein. Enterprise Architecture for Integration: Rapid Delivery Methods and Technologies, Third Edition, 2015 ebook - download in PDF from www.ies.aust.com. References External links Clive Finkelstein Home Page and latest books - Australia "Information Engineering, Portals and Data Warehouses" Interview with Clive Finkelstein (Real Video, Windows Media Video, MP3 podcast, running time 10:13) Year of birth missing (living people) Australian computer scientists Database specialists University of New South Wales alumni
34954105
https://en.wikipedia.org/wiki/M.%20Anto%20Peter
M. Anto Peter
M. Anto Peter Ramesh (26 April 1967 – 12 July 2012) was a Tamil software vendor and technical writer. Peter was born in 1967 in Arumuganeri, Tuticorin district, and later moved to Chennai. He attended St. Dominic Savio Primary School in Perambur, Chennai, and Don Bosco Higher Secondary School, Perambur, and took a radio officer course at the Ramana Institute, Adyar. He was in the first batch of students when computer science was introduced at the polytechnic level, completing a Diploma in Computer Science, and he later earned a degree in mathematics and a master's degree in business administration specialising in marketing. He was one of the well-known computer professionals in India who, beginning in his late 20s, started many missions to spread computer education among Indian youth. He was the managing director of Softview Media College, through which he educated students in the fields of multimedia, print media, graphics and animation, and developed Tamil fonts and Tamil typing software.

Features
Anto Peter was the first to introduce multimedia training and conducted more than 500 seminars on job opportunities in the field of multimedia, giving free consultancy on setting up small-scale entrepreneurship. People close to him said he also gave free guidance to young entrepreneurs. Anto Peter was a member of the Tamil Software Development Fund, the Semmozhi Conference, the Semmozhi Digital Library, the Board of Studies of the University of Madras, and the 12th five-year plan committee, represented by the Government of Tamil Nadu. In addition, he was a Governing Council member of the Tamil Valarchi Kazhagam for the year 2007–08. 
He died on 12 July 2012 at 3:00 am (Indian Standard Time) of a heart attack. He actively participated in worldwide Tamil computing conferences and submitted around 26 research papers on Tamil computing development. He was the first to start a Tamil e-zine, Tamilcinema.com, as early as 1997, which is still popular today among the Tamil community around the globe. It was the only e-zine advertised in all the media, as it was the first of its kind at a time when other entertainment news providers were still publishing news in paper format.

Books

Written in English:
- Macromedia Hand Book
- Multimedia Hand Book
- Keyboard Shortcuts

Written in Tamil:
- Computer Virus
- Adobe Premiere
- Desk Top Publishing
- Corel Draw
- Tamil 99
- Braille Hand Book
- What we can learn in Computer?
- Mobile Phone Terminology
- Learn Computer
- Learn Computer Tamil Typing
- Multimedia Q & A
- Computer Job Ready?
- Internet Guide
- Computer Q & A
- Information Technology Terminology
- Tamil & Computer
- Learn Internet within 24 Hours
- Computer related studies!
- Basics of Multimedia
- Graphics & Animation
- Computer related small business

Honours
Anto Peter held many posts, executive committee memberships and memberships in state and central government associations, including:
- Kani Tamizh Sangam – President
- Tamil Heritage Foundation – Secretary
- Tamil Software Development Fund – Member, represented by the Tamil Nadu Government
- Semmozhi Conference – Member, represented by the Tamil Nadu Government
- Semmozhi Digital Library, Mysore – Member, represented by the Government of India
- 12th Five Year Plan committee – Member, represented by the Tamil Nadu Government
- Board of Studies, University of Madras – Member
- 16-bit Unicode standardisation – Member, represented by the Tamil Nadu Government
- U.Ve.Sa. Library – Governing Council Member
- Tamil Valarchi Kazhagam – Governing Council Member (2007–2008)

Awards
He won several government awards for his books on information technology:
- Tamizhum Kaniporium (Tamil and Computer) – best book government award, 2004
- Best Author Award, 2007, at the Neyveli Book Fair subsidised by the Central Government
- Bharathi literary award, 2010, by the Shriram group
- Periyar Award, 2012, by the Periyar Muthamizh Mandram

References 1967 births 2012 deaths Indian technology writers Tamil writers Tamil-language writers Don Bosco schools alumni
40059225
https://en.wikipedia.org/wiki/Chromecast
Chromecast
Chromecast is a line of digital media players developed by Google. The devices, designed as small dongles, can play Internet-streamed audio-visual content on a high-definition television or home audio system. The user controls playback with a mobile device or personal computer through mobile and web apps that support the Google Cast protocol, or by issuing commands via Google Assistant. Alternatively, content can be mirrored from the Google Chrome web browser on a personal computer or from the screen of some Android devices. The first-generation Chromecast, a video streaming device, was announced on July 24, 2013, and made available for purchase in the United States the same day. The second-generation Chromecast and an audio-only model called Chromecast Audio were released in September 2015. A model called Chromecast Ultra that supports 4K resolution and high dynamic range was released in November 2016. A third generation of the HD video Chromecast was released in October 2018. The latest model, called Chromecast with Google TV, was released in September 2020 and is the first in the product line to feature an interactive user interface and remote control. Critics praised the Chromecast's simplicity and potential for future app support. The Google Cast SDK was released on February 3, 2014, allowing third parties to modify their software to work with Chromecast and other Cast receivers. According to Google, over 20,000 Google Cast–ready apps were available as of May 2015. Over 30 million units have been sold globally since launch, making the Chromecast the best-selling streaming device in the United States in 2014, according to NPD Group. From Chromecast's launch to May 2015, it handled more than 1.5 billion stream requests.

Development
According to Google, the Chromecast was originally conceived by engineer Majd Bakar. His inspiration for the product came around 2008 after noticing the film-viewing tendencies of his wife Carla Hindie. 
Using her laptop, she would search for a film to watch on a streaming service and add it to her queue, before closing her laptop and using a gaming device to play the film on a television. She took these steps because she found television interfaces difficult to use to search for content. Bakar found the whole process inefficient and wanted to build a phone-based interface that would allow video to play on a large display through a small hardware device. After joining Google in 2011 to work on products that "would change how people used their TVs", Bakar pitched the idea for the Chromecast. Development on the product began in 2012; late that year, Bakar brought home a beta version of the product for Hindie to test. The device was launched in July 2013. Features and operation Chromecast offers two methods to stream content: the first employs mobile and web apps that support the Google Cast technology; the second allows mirroring of content from the web browser Google Chrome running on a personal computer, as well as content displayed on some Android devices. In both cases, playback is initiated through the "cast" button on the sender device. When no content is streamed, video-capable Chromecasts display a user-personalizable content feed called "Backdrop" that can include featured and personal photos, artwork, satellite images, weather forecasts, and news. If a television's HDMI ports support the Consumer Electronics Control (CEC) feature, pressing the cast button will also result in the video-capable Chromecast automatically turning on the TV and switching the television's active audio/video input using the CEC command "One Touch Play". Hardware and design Chromecast devices are dongles that are powered by connecting the device to an external power adapter or USB port using a USB cable. 
Video-capable Chromecasts plug into the HDMI port of a high-definition television or monitor, while the audio-only model outputs sound through its integrated 3.5 millimeter audio jack/mini-TOSLINK socket. By default, Chromecasts connect to the Internet through a Wi-Fi connection to the user's local network. A standalone USB power supply with an Ethernet port allows for a wired Internet connection; the power adapter for early Chromecast models was first introduced in July 2015 for US$15, while the adapter for Chromecast with Google TV was released in October 2020 for US$20. First generation The original Chromecast measures in length and has an HDMI plug built into the body. It contains the Marvell Armada 1500-mini 88DE3005 system on a chip (SoC) running an ARM Cortex-A9 processor. The SoC includes codecs for hardware decoding of the VP8 and H.264 video compression formats. Radio communication is handled by AzureWave NH–387 Wi-Fi which implements 802.11 b/g/n (2.4 GHz). The device has 512 MB of Micron DDR3L RAM and 2 GB of flash storage. The model number H2G2-42 is likely a reference to The Hitchhiker's Guide to the Galaxy abbreviation "H2G2"—in the novel, the number 42 is the "Answer to the Ultimate Question of Life, the Universe, and Everything." The bundled power adapter bears the model number MST3K-US, a reference to the television series Mystery Science Theater 3000. Second generation The second-generation Chromecast has a disc-shaped body with a short length of HDMI cable attached (as opposed to the HDMI plug built into the original model). The cable is flexible and the plug can magnetically attach to the device body for more positioning options behind a television. The second-generation model uses a Marvell Armada 1500 Mini Plus 88DE3006 SoC, which has dual ARM Cortex-A7 processors running at 1.2 GHz. 
The unit contains an Avastar 88W8887, which has improved Wi-Fi performance and offers support for 802.11 ac and 5 GHz bands, while containing three adaptive antennas for better connections to home routers. The device contains 512 MB of Samsung DDR3L RAM and 256 MB of flash storage. The model number NC2-6A5 may be a reference to the registry number "NCC-1701" of the fictional starship USS Enterprise from the Star Trek franchise, the "saucer section" of which the device resembles: NC2 can be read as NCC, and 6A5 converted from hexadecimal is 1701. Chromecast Audio Chromecast Audio is a variation of the second-generation Chromecast designed for use with audio streaming apps. Chromecast Audio features a 3.5 millimeter audio jack/mini-TOSLINK socket, allowing the device to be attached to speakers and home audio systems. One side of the device is inscribed with circular grooves, resembling those of a vinyl record. A December 2015 update introduced support for high-resolution audio (24-bit/96 kHz) and multi-room playback; users can simultaneously play audio across multiple Chromecast Audio devices in different locations by grouping them together using the Google Home mobile app. The feature made Chromecast Audio a low-cost alternative to Sonos' multiple-room music systems. With the advent of Google Home smart speakers, the device became tangential to Google's product strategy and was discontinued in January 2019. In addition, the third-generation Chromecast supports Chromecast Audio technology, allowing it to be paired with other devices for multi-room synchronized playback. The model number RUX-J42 may have been a reference to the Jimi Hendrix albums Are You Experienced (stylized "R U eXperienced") and Midnight Lightning, which had the internal code J-42. Chromecast Audio was also developed with the internal codename Hendrix. 
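The hexadecimal reading behind the second-generation model number NC2-6A5 can be verified with a couple of lines of Python (a sketch for illustration only; the function name is hypothetical):

```python
def decode_model_suffix(suffix_hex):
    """Interpret a model-number suffix as a hexadecimal value."""
    return int(suffix_hex, 16)

# NC2-6A5 read as a Star Trek registry: NC2 -> NCC, and 6A5 in hex -> 1701.
print(decode_model_suffix("6A5"))  # 1701
```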
Chromecast Ultra Chromecast Ultra is similar in design to the second-generation model, but features upgraded hardware that supports the streaming of 4K resolution content, as well as high-dynamic range (HDR) through the HDR10 and Dolby Vision formats. Google stated that the Chromecast Ultra loads video 1.8 times faster than previous models. Unlike previous models that could be powered through a USB port, the Chromecast Ultra requires the use of the included power supply for connecting to a wall outlet. The power supply also offers an Ethernet port for a wired connection to accommodate the fast network speeds needed to stream 4K content. The Chromecast Ultra was one of the first devices to support Google's cloud gaming service Stadia; a Chromecast Ultra was included with a controller in the "Founder's Edition" and "Premiere Edition" bundles for Stadia. Third generation The third-generation Chromecast added 60 frames-per-second playback support at a resolution of 1080p, compared to the second-generation Chromecast's maximum of 720p at the same frame rate. Google said the third-generation Chromecast offered a 15 percent increase in speed over the second-generation model. The magnetic attachment between the dongle body and HDMI plug that was present on prior models was dropped for the third-generation device. Chromecast with Google TV The latest model, called Chromecast with Google TV, runs the Android TV operating system and features a brand new user interface branded "Google TV" that is navigated with an included Bluetooth remote control. The remote has dedicated buttons for opening YouTube and Netflix, as well as a Google Assistant button for initiating voice commands or search queries through the remote's microphone. The remote can be programmed to control the power, volume, and input functions of televisions and soundbars through HDMI-CEC or infrared signals. Like previous models, the Chromecast with Google TV allows content to be cast to it from other devices. 
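The need for a wired Ethernet option can be illustrated with a back-of-the-envelope calculation. The 25 Mbit/s figure below is a commonly cited 4K streaming bitrate and is an assumption for illustration, not a number from Google's specifications:

```python
def hourly_data_gb(bitrate_mbps, hours=1.0):
    """Data moved by a sustained stream, in gigabytes (decimal GB)."""
    return bitrate_mbps * 1e6 / 8 * 3600 * hours / 1e9

# A sustained 25 Mbit/s 4K stream moves about 11.25 GB per hour, a load
# that a congested 2.4 GHz Wi-Fi link may struggle to keep up with.
print(hourly_data_gb(25))  # 11.25
```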
It outputs video up to a 4K resolution and supports HDR through the Dolby Vision, HDR10, and HDR10+ formats, while also supporting the audio formats Dolby Digital, Dolby Digital Plus, and Dolby Atmos. Support for Google's Stadia cloud gaming service was added on the device on June 23, 2021. Unlike some previous models that could be powered by a television's USB port, the Chromecast with Google TV requires a power adapter, which connects via USB cable to the dongle's USB-C port. The Chromecast and its remote are available in three different colors: Snow, Sky, and Sunrise. Model comparison Software Google Cast SDK and compatible apps At the time of Chromecast's launch, four compatible apps were available: YouTube and Netflix were supported as Android, iOS, and Chrome web apps; Google Play Music and Google Play Movies & TV were also supported, but originally only as Android apps. Additional Chromecast-enabled apps would require access to the Google Cast software development kit (SDK). The SDK was first released as a preview version on July 24, 2013. Google advised interested developers to use the SDK to create and test Chromecast-enabled apps, but not distribute them. While that admonition remained in force, Chromecast-enabled applications for Hulu Plus and Pandora Radio were released in October 2013, and HBO Go in November. Google opened the SDK to all developers on February 3, 2014. In its introductory documentation and video presentation, Google said the SDK worked with both Chromecast devices and other unnamed "cast receiver devices". Chromecast product manager Rish Chandra said that Google used the intervening time to improve the SDK's reliability and accommodate those developers who sought a quick and easy way to cast a photo to a television without a lot of coding. Over time, many more applications have been updated to support Chromecast. 
At Google I/O 2014, the company announced that 6,000 registered developers were working on 10,000 Google Cast–ready apps; by the following year's conference, the number of compatible apps had doubled. Google's official list of compatible apps and platforms is available on the Chromecast website. Google has published case studies documenting Chromecast integration by Comedy Central, Just Dance Now, Haystack News and Fitnet. In July 2019, the Amazon Prime Video apps for Android and iOS added Chromecast support, marking the first time Amazon's streaming service supported the device. The move followed a four-year dispute between Google and Amazon in which Amazon stopped selling Chromecast devices and Google pulled YouTube from Amazon Fire TV. The development framework has two components: a sender app based on a vendor's existing Android or iOS mobile app, or desktop Web app, which provides users with content discovery and media controls; and a receiver app, executing in a Chrome browser-like environment resident on the cast receiver device. Both make use of APIs provided by the SDK. Device discovery protocols Chromecast uses the multicast Domain Name System (mDNS) protocol to search for available devices on a Wi-Fi network. Chromecast previously used the Discovery and Launch (DIAL) protocol, which was co-developed by Netflix and YouTube. Operating system At the introductory press conference, Mario Queiroz, Google's VP of Product Management, said that the first-generation Chromecast ran "a simplified version of Chrome OS". Subsequently, a team of hackers reported that the device is "more Android than ChromeOS" and appears to be adapted from software that was embedded in the since-discontinued Google TV platform. As with Chrome OS devices, Chromecast operating system updates are downloaded automatically without notification. Differing from all previous models, the Chromecast with Google TV runs on the Android TV operating system. 
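The mDNS discovery described above can be sketched at the packet level: a sender multicasts a DNS PTR query for the `_googlecast._tcp.local` service to the standard mDNS group (224.0.0.251, port 5353) and listens for responses. This is a minimal illustration, not how the Cast SDK is implemented; production apps typically use an mDNS library instead:

```python
import socket
import struct

def build_mdns_query(service="_googlecast._tcp.local"):
    """Build a single-question mDNS PTR query for the given service name."""
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)  # ID 0, no flags, 1 question
    name = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.split(".")
    ) + b"\x00"  # DNS name: length-prefixed labels, zero-terminated
    return header + name + struct.pack("!2H", 12, 1)  # QTYPE=PTR, QCLASS=IN

def discover(timeout=2.0):
    """Multicast the query and collect addresses of responders (best effort;
    requires a local network with Cast devices present)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_mdns_query(), ("224.0.0.251", 5353))  # mDNS group/port
    found = []
    try:
        while True:
            _, (addr, _) = sock.recvfrom(4096)
            found.append(addr)
    except socket.timeout:
        pass
    return found

print(len(build_mdns_query()))  # 40-byte query packet
```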
A modified user interface, branded "Google TV" (unrelated to Google's discontinued smart TV platform), replaces the stock interface of Android TV. The Google TV interface emphasizes content recommendations and discovery across different services and installed apps, compared to the stock Android TV interface that is more focused on navigating between individual installed apps. Google TV is compatible with over 6,500 apps built for Android TV. At launch, over 30 streaming services were integrated with Google TV for use in its content aggregation features. Mobile app Chromecast is managed through the Google Home app, which enables users to set up new devices and configure existing ones (such as specifying which "Ambient Mode" images are shown when no other content is cast). The app manages other Google Cast-supported devices, including the Google Home smart speaker. Originally called simply "Chromecast", the app was released concurrently with the original Chromecast video model and made available for both Android and iOS mobile devices. The app was released outside the US in October 2013. In May 2016, the Chromecast app was renamed Google Cast due to the proliferation of non-Chromecast products that support casting. In October 2016, the Google Cast app was renamed Google Home, the name also given to the company's smart speaker—leaving "Google Cast" as the name of the technology. Release and promotion Google made the first-generation Chromecast available for purchase online in the US on July 24, 2013. To entice consumers, Google initially included a promotion for three months of access to Netflix at no cost with the purchase of a Chromecast. The device quickly sold out on Amazon.com, BestBuy.com, and Google Play, and within 24 hours, the Netflix promotion was ended because of high demand. On March 18, 2014, Google released the Chromecast to 11 new markets, including the UK, Germany and Canada with the BBC iPlayer enabled for UK users. 
In July 2014, to commemorate the first anniversary of the device's launch, Google announced it would offer their music streaming service, Google Play Music All Access, at no cost for 90 days to Chromecast owners who had not previously used All Access; the service normally costs US$9.99 per month. On December 10, 2014, Chromecast was launched in India through e-commerce marketplace Snapdeal in partnership with Bharti Airtel. That same month, Google offered a promotion whereby anyone purchasing a Chromecast from a participating retailer before December 21 would receive a US$20 credit for the Google Play Store. Google offered a US$6 credit to the Store for all Chromecast owners beginning on February 6, 2015. On September 29, 2015, Google announced the second-generation Chromecast and an audio-only model called Chromecast Audio. Each model was made available for purchase the same day for US$35. Days later, Amazon.com announced that it would ban the sale of Chromecast and Apple TV devices, presumably because they compete with Amazon's own Fire TV and Fire TV Stick. Google discontinued Chromecast Audio in January 2019. On September 30, 2020, Google announced the Chromecast with Google TV during its "Launch Night In" event, though the product was already sold early at some retailers such as Walmart and the Home Depot during the week prior to its announcement. Google offered a promotion whereby anyone who signed up for YouTube TV and paid for one month of the service (a US$65 cost) would receive a Chromecast with Google TV at no cost; the offer was available only in the US to first-time YouTube TV subscribers. Additionally, in December 2020, Google made an offer available to YouTube TV users who had been continuous subscribers since June 2018 that allowed them to redeem a Chromecast with Google TV at no cost. 
Reception First generation model Nilay Patel of The Verge gave the Chromecast an 8.5/10 score in his review, saying, "The Chromecast is basically an impulse purchase that just happens to be the simplest, cheapest, and best solution for getting a browser window on your TV." Speaking of the adapter's potential, he said, "it seems like the Chromecast might actually deliver on all that potential, but Google still has a lot of work to do." In particular, Patel pointed to Apple's AirPlay protocol as an example of an established competitor with many more features. TechCrunchs review of the device said, "Even with a bug or two rearing its head, the Chromecast is easily worth its $35 pricetag." Gizmodo gave the device a positive review, highlighting the ease of setup and sharing video. In comparing the device to competitors, the review said, "Chromecast isn't Google's version of Apple TV, and it's not trying to be... But Chromecast also costs a third of what those devices do, and has plenty of potential given that its SDK is just a few days old." Michael Gorman of Engadget gave the Chromecast an 84/100 score, writing, "it's a platform that's likely to improve dramatically as more apps start to support the technology." In his comparing the Chromecast to competing devices, Gorman illustrated that it initially had support from fewer multimedia services, but because of its low price and ease of use, he concluded "we can wholeheartedly recommend the Chromecast for anyone who's been looking for an easy, unobtrusive way to put some brains into their dumb TV." Will Greenwald of PC Magazine rated it 4/5, saying, "The Google Chromecast is the least expensive way to access online services on your HDTV", although he noted that "The lack of local playback and limited Chrome integration holds it back in some respects." 
David Pogue of The New York Times praised the device for its $35 retail price, saying, "It's already a fine price for what this gadget does, and it will seem better and better the more video apps are made to work with it." Pogue noted the limitations of the device's screen mirroring feature and said using only mobile devices as a remote control was not "especially graceful", but he called Chromecast the "smallest, cheapest, simplest way yet to add Internet to your TV". Second generation model Nicole Lee of Engadget rated the second generation Chromecast an 85/100, highlighting the added support for 802.11ac and dual-band Wi-Fi networks and the usefulness of the updated Chromecast mobile app for finding content to cast. She said of the device, "No, it's not that much better than the original, but it still delivers great bang for your buck." David Katzmaier of CNET gave it a 7.9/10 score, calling the new hardware design more practical and praising the Chromecast app's search capabilities. He ultimately preferred other streaming devices with dedicated remotes over the Chromecast for everyday use, but said "for parties, travel and temporary connections, it's worth having a Chromecast in your arsenal". Third generation model In the face of stronger competition from devices such as the Apple TV, Roku or Fire TV, reviewers started to consider the 2018 Chromecast a secondary streaming device. Trusted Reviews considered it a "very minor" upgrade. Tom's Guide said it has almost "nothing to show" to reflect three years of hardware advancement in the streaming space. Chromecast with Google TV Chris Welch of The Verge gave the Chromecast with Google TV an 8.5/10 score, calling it a "big success" that "checks off almost everything important" for a streaming device. Welch praised the remote control and the Google TV interface's emphasis on content discovery, while noting some occasional sluggish performance. 
He concluded that Google "reinvented the Chromecast as an excellent 4K streamer that's dramatically easier to use — turns out actual menus and a remote really do matter — without losing sight of what made the original great". Sam Rutherford of Gizmodo said the device "instantly catapulted Google to the front of the streaming dongle wars with a $50 device that's smarter and easier to use than pretty much anything else out there". He lauded the remote control and user interface of Google TV, saying that it "feels just a bit more curated, polished, and tweaked to make the process of jumping back into your favorite shows and movies (or discovering new ones) that much faster". Eli Blumenthal of CNET gave the device a 9/10 score and described it as "the search giant's best TV effort yet and one of the best streamers you can buy, period". He praised Google TV's content aggregation and called it an upgrade over the stock Android TV interface. Blumenthal also called the integration with Google Assistant the best part of the Chromecast, despite some quirks with search results for video content. Nicole Lee of Engadget called it "not only the best Chromecast yet, but also one of the most value-packed streaming devices on the market". She complimented the remote control design and the Google TV interface for being "far easier to navigate" than the standard Android TV interface. She also opined that Google TV was better than Amazon's Fire TV at aggregating content from multiple services, and that Google Assistant was "smarter" than Amazon's Alexa for voice commands. Nick Pino of TechRadar rated the device four-and-a-half stars and called it "a revelation – it fixes something that wasn't broken, and improves a nearly perfect technology in a tangible way". He praised the hardware, video and audio format support, and the user interface's ease of use, calling it a "retooled streaming device that... 
offers a whole new experience that's more user-friendly for folks who are used to using a remote control and an easily navigable interface." Brian X. Chen of The New York Times was surprised by the number of privacy policies the user had to agree to and the number of permissions the user had to grant during the setup process, and he was disappointed with the recommendations given by Google TV. Sales and impact In July 2014, Google announced that in the device's first year on sale, "millions" of units had sold and over 400 million casts had been made. The number of casts surpassed one billion by January 2015, and 1.5 billion by May 2015. The company confirmed that Chromecast was the best-selling media streaming device in the United States in 2014, according to NPD Group. In February 2015, Google Korea announced that about 10 million Chromecasts had been sold globally in 2014. At Google I/O in May 2015, the company announced 17 million units had sold since launch, a figure that reached 20 million by September 2015, 25 million by May 2016, and 30 million by July 2016. According to Strategy Analytics, Chromecast captured more than 35% of the digital streamer market internationally in 2015. As of October 2017, over 55 million Chromecasts and Chromecast built-in devices have been sold. Digital Trends named Chromecast the "Best Product of 2013". In March 2014, Engadget named Chromecast an Editor's Choice winner for "Home Theater Product of the Year" as part of the website's annual awards; for the following year's awards, the website named the device the winner of "Best in Home Entertainment". In July 2015, Google signed a deal with the Television Academy to provide Chromecasts to Emmy Award voters to allow them to view screeners of nominated media. The multi-year agreement will reduce the volume of DVD screeners distributed each year. Chromecast appeared on several lists of technology from the 2010s. 
Time named it one of the 10 best gadgets of the decade, saying, "It might not be an essential piece of technology in the decade to come, but the Chromecast's influence on streaming media can't be understated." USA Today ranked Chromecast the 7th-best gadget of the 2010s. PC Magazine listed it as one of the "most iconic tech innovations" of the decade, saying, "Google made wireless streaming from mobile devices to the TV as simple as a few taps, all for $35." The Verge ranked it 39th on their list of the gadgets of the 2010s, saying that Chromecast "helped make streaming video a normal part of many households". Security On January 3, 2019, hackers took control of Chromecast devices, stating they were exposing security risks. The hackers claimed to access 70,000 devices through a router setting that makes connected devices viewable to the public. The bug was dubbed CastHack, and was first found in 2014 by the security consultancy firm Bishop Fox and observed again in 2016. See also Comparison of digital media players References External links Official help center List of Chromecast enabled apps Google Home app on the Google Play Store Google Home app on the Apple App Store DIAL Protocol Specification and Registry
59945568
https://en.wikipedia.org/wiki/BitSight
BitSight
BitSight is a cybersecurity ratings company that analyzes companies, government agencies, and educational institutions. It is based in Back Bay, Boston. Security ratings that are delivered by BitSight are used by banks and insurance companies among other organizations. The company rates more than 200,000 organizations with respect to their cybersecurity. History BitSight was founded in 2011 by Nagarjuna Venna and Stephen Boyer and currently has both United States-based and international employees. In 2014, BitSight acquired AnubisNetworks, a Portugal-based cybersecurity firm that tracks real-time data threats. By September 2016, BitSight had raised $40 million in a Series C round led by GGV Capital, with participation from Flybridge Capital Partners, Globespan Capital Partners, Menlo Ventures, Shaun McConnon, and the VC divisions of Comcast Ventures, Liberty Global Ventures, and Singtel Innov8. Shaun McConnon stepped down as the CEO of BitSight in July 2017 but remains the executive chairman of the board. The CEO position was filled by Tom Turner in 2017, and then by Stephen Harvey in 2020. In June 2018, BitSight closed $60 million in Series D funding, bringing the company's total funding to $155 million. BitSight's Series D financing was led by Warburg Pincus, with participation from existing investors Menlo Ventures, GGV Capital and Singtel Innov8. In 2018, the company was located in Cambridge but purchased property in order to shift to Back Bay, where BitSight is currently located. Forbes estimated BitSight's revenue at US$100 million as of 2018. Services Organizations purchase BitSight's services in order to understand "security risks associated with sharing sensitive data with business partners." As of 2018, BitSight serves clients including Lowe's, AIG, and Safeway. 
BitSight assembles models that produce company ratings, which are based on a scale that enables insurers to rule on the ability of businesses to receive coverage. It produces ratings for 200,000 organizations as of 2020. With respect to its services, Amy Feldman of Forbes writes that "Customers pay on a subscription basis with annual fees ranging from a few thousand dollars to analyze a single company to more than $1 million to review thousands of suppliers." Similar to a credit score, BitSight's ratings range from 250 to 900. BitSight released a report "examining ransomware infection across six industry sectors compiled from 18,996 companies." These security ratings included an examination of ransomware infection across the sectors of "Education, Energy/Utilities, Finance, Government, Healthcare and Retail". References External links BitSight
39635381
https://en.wikipedia.org/wiki/OS%20X%20Mavericks
OS X Mavericks
OS X Mavericks (version 10.9) is the 10th major release of macOS, Apple Inc.'s desktop and server operating system for Macintosh computers. OS X Mavericks was announced on June 10, 2013, at WWDC 2013, and was released on October 22, 2013, worldwide. The update emphasized battery life, Finder improvements, other improvements for power users, and continued iCloud integration, as well as bringing more of Apple's iOS apps to OS X. Mavericks, which was named after the surfing location in Northern California, was the first in the series of OS X releases named for places in Apple's home state; earlier releases used the names of big cats. OS X Mavericks was the first OS X major release to be a free upgrade and the second overall since Mac OS X 10.1 "Puma". It is the final Mac operating system to feature the Lucida Grande typeface as the standard system font since Mac OS X Public Beta in 2000. History Apple announced OS X Mavericks on June 10, 2013, during the company's Apple Worldwide Developers Conference (WWDC) keynote (which also introduced iOS 7, a revised MacBook Air, the sixth-generation AirPort Extreme, the fifth-generation AirPort Time Capsule, and a redesigned Mac Pro). During a keynote on October 22, 2013, Apple announced that the official release of 10.9 on the Mac App Store would be available immediately, and that unlike previous versions of OS X, 10.9 would be available at no charge to all users running Snow Leopard (10.6.8) or later. On October 22, 2013, Apple offered free upgrades for life on OS X and iWork. System requirements OS X Mavericks can run on any Mac that can run OS X Mountain Lion; as with Mountain Lion, 2 GB of RAM, 8 GB of available storage, and OS X 10.6.8 (Snow Leopard) or later are required. Mavericks and later versions are all available for free. 
The full list of compatible models: iMac (Mid 2007 or later) MacBook (13-inch Aluminum, Late 2008), (13-inch Polycarbonate, Early 2009 or later) MacBook Pro (13-inch, Mid/Late 2007 or later), (15-inch or 17-inch, Mid/Late 2007 or later) MacBook Air (Late 2008 or later) Mac Mini (Early 2009 or later) Mac Pro (Early 2008 or later) Xserve (Early 2009) System features The menu bar and the Dock are available on each display. Additionally, AirPlay compatible displays such as the Apple TV can be used as an external display. Mission Control has been updated to organize and switch between Desktop workspaces independently between multiple displays. OS X Mavericks introduced App Nap, which sleeps apps that are not currently visible. Any app running on Mavericks can be eligible for this feature by default. Compressed Memory is a virtual memory compression system which automatically compresses data from inactive apps when approaching maximum memory capacity. Timer coalescing is a feature that enhances energy efficiency by reducing CPU usage by up to 72 percent. This allows MacBooks to run for longer periods of time and desktop Macs to run cooler. Apple now supports OpenGL 4.1 Core Profile and OpenCL 1.2. Server Message Block version 2 (SMB2) is now the default protocol for sharing files, rather than AFP. This is to increase performance and cross-platform compatibility. Some skeuomorphs, such as the leather texture in Calendar, the legal pad theme of Notes, and the book-like appearance of Contacts, have been removed from the UI. iCloud Keychain stores a user's usernames, passwords and Wi-Fi passwords to allow the user to fill this information into forms when needed. The system has native LinkedIn sharing integration. IPoTB (Internet Protocol over Thunderbolt Bridge) Thunderbolt networking is supported in Mavericks. This feature allows the user to quickly transfer a large amount of data between two Macs. 
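Timer coalescing, described above, batches nearby timer deadlines so the CPU wakes once per batch instead of once per timer. A toy Python illustration of the idea (the greedy grouping policy here is an assumption for illustration, not Apple's actual scheduler):

```python
def coalesce(deadlines, leeway):
    """Greedily batch sorted timer deadlines that fall within a shared
    leeway window, so each batch needs only one CPU wakeup."""
    batches = []
    for t in sorted(deadlines):
        if batches and t <= batches[-1][0] + leeway:
            batches[-1].append(t)  # fires alongside the batch leader
        else:
            batches.append([t])    # starts a new wakeup
    return batches

# Five timers collapse into two wakeups instead of five.
print(coalesce([0, 1, 2, 10, 11], leeway=3))  # [[0, 1, 2], [10, 11]]
```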
Notification Center allows the user to reply to notifications instantly, allows websites to send notifications, and, when the user wakes up a Mac that was in a sleep state, displays a summary of missed notifications before the machine is unlocked. Some system alerts, such as low battery, removal of drives without ejecting, and a failed Time Machine backup, have been moved to Notification Center. The "traffic light" close, minimize, and maximize window buttons appear somewhat brighter than in Mac OS X Lion and OS X Mountain Lion. App features Finder gets enhancements such as tabs, full-screen support, and document tags. Pinch-to-zoom and swipe-to-navigate-history gestures have been removed from Finder, although both remain supported elsewhere in the OS. The new iBooks application allows the user to read books purchased through the iBooks Store. The app also allows the user to purchase new content from the iBooks Store, and includes a night mode to make reading easier in dark environments. The new Maps application gives the user the same functionality as iOS Maps. The Calendar app has enhancements such as being able to add Facebook events, and an estimate for the travel time to an event. The Safari browser has significantly enhanced JavaScript performance, which Apple claims is faster than Chrome and Firefox. A Top Sites view allows the user to quickly access the most viewed sites by default. However, the user can pin or remove websites from the view. The sidebar now allows the user to view their bookmarks, reading list and shared links. Safari can also auto-generate random passwords and remember them through iCloud Keychain. 
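Safari's generated passwords come from a cryptographically secure random source. As a rough illustration of the idea (the grouped xxxxxx-xxxxxx-xxxxxx format and character set here are assumptions for the sketch, not Apple's actual algorithm), Python's secrets module can do the same:

```python
import secrets
import string

def generate_password(groups=3, group_len=6):
    """Generate a random password in grouped form (e.g. xxxxxx-xxxxxx-xxxxxx)
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits
    return "-".join(
        "".join(secrets.choice(alphabet) for _ in range(group_len))
        for _ in range(groups)
    )

print(generate_password())  # e.g. 'k3XqZn-0mPaQ7-vB9sLd'
```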
Other applications found in Mavericks

AirPort Utility
App Store
Archive Utility
Audio MIDI Setup
Automator
Bluetooth File Exchange
Boot Camp Assistant
Calculator
Chess
ColorSync Utility
Console
Contacts
Dictionary
Digital Color Meter
Disk Utility
DVD Player
FaceTime
Font Book
Game Center
GarageBand (may not be pre-installed)
Grab
Grapher
iMovie (may not be pre-installed)
iTunes
Image Capture
Ink (can only be accessed by connecting a graphics tablet to your Mac)
Keychain Access
Keynote (may not be pre-installed)
Mail
Messages
Migration Assistant
Notes
Notification Center
Numbers (may not be pre-installed)
Pages (may not be pre-installed)
Photo Booth
Preview
QuickTime Player
Reminders
Script Editor
Stickies
System Information
Terminal
TextEdit
Time Machine
VoiceOver Utility
X11/XQuartz (may not be pre-installed)

Removed functionality

The Open Transport API has been removed. USB syncing of calendar, contacts and other information to iOS devices has been removed, instead requiring the use of iCloud. QuickTime 10 no longer supports many older video codecs and converts them to the ProRes format when opened. Older video codecs cannot be viewed in Quick Look. Apple also removed the ability to sync mobile iCloud Notes if iOS devices were upgraded from iOS 8 to iOS 9, effectively forcing all Mavericks users to update or upgrade their computers.

Reception

OS X Mavericks has received mixed reviews. One complaint is that Apple removed the local sync services, which forces users to get iCloud to sync iOS devices with the desktop OS. However, this feature has since returned in the 10.9.3 and iTunes 11.2 updates. The Verge stated that OS X Mavericks was “a gentle evolution of the Mac operating system”.

Release history

See also
Aqua (user interface)
macOS version history
List of Macintosh software

References

X86-64 operating systems 2013 software Computer-related introductions in 2013
25398334
https://en.wikipedia.org/wiki/Advanced%20Format
Advanced Format
Advanced Format (AF) is any disk sector format used to store data on magnetic disks in hard disk drives (HDDs) that exceeds 512, 520, or 528 bytes per sector, such as the 4096, 4112, 4160, and 4224-byte (4 KB) sectors of an Advanced Format Drive (AFD). Larger sectors enable the integration of stronger error correction algorithms to maintain data integrity at higher storage densities.

History

The use of long data sectors was suggested in 1998 in a technical paper issued by the National Storage Industry Consortium (NSIC) calling attention to the conflict between continuing increases in areal density and the traditional 512-byte-per-sector format used in hard disk drives. Without revolutionary breakthroughs in magnetic recording system technologies, areal densities, and with them the storage capacities of hard disk drives, were projected to stagnate. The storage industry trade organization, International Disk Drive Equipment and Materials Association (IDEMA), responded by organizing the IDEMA Long Data Sector Committee in 2000, where IDEMA and leading hardware and software suppliers collaborated on the definition and development of standards governing long data sectors, including methods by which compatibility with legacy computing components would be supported. In August 2005, Seagate shipped test drives with 1K physical sectors to industry partners for testing. In 2010, industry standards for the first official generation of long data sectors using a configuration of 4096 bytes per sector, or 4K, were completed. All hard drive manufacturers committed to shipping new hard drive platforms for desktop and notebook products with the Advanced Format sector formatting by January 2011. The term Advanced Format was coined to cover what was expected to become several generations of long-data-sector technologies, and its logo was created to distinguish long-data-sector–based hard disk drives from those using legacy 512-, 520- or 528-byte sectors.
Overview

Generation-one Advanced Format, 4K sector technology, uses the storage surface media more efficiently by combining data that would have been stored in eight 512-byte sectors into one single sector that is 4096 bytes (4 KB) in length. Key design elements of the traditional 512-byte sector architecture are maintained, specifically, the identification and synchronization marks at the beginning and the error correction coding (ECC) area at the end of the sector. Between the sector header and ECC areas, eight 512-byte sectors are combined, eliminating the need for redundant header areas between each individual chunk of 512-byte data. The Long Data Sector Committee selected the 4K block length for the first generation AF standard for several reasons, including its correspondence to the paging size used by processors and some operating systems as well as its correlation to the size of standard transactions in relational database systems. Format efficiency gains resulting from the 4K sector structure range from 7 to 11 percent in physical platter space. The 4K format provides enough space to expand the ECC field from 50 to 100 bytes to accommodate new ECC algorithms. The enhanced ECC coverage improves the ability to detect and correct processed data errors beyond the 50-byte defect length associated with the 512-byte sector legacy format. The Advanced Format standard employs the same gap, sync and address mark configuration as the traditional 512-byte sector layout, but combines eight 512-byte sectors into one data field. Because a huge number of legacy 512-byte-sector–based hard disk drives had shipped up to the middle of 2010, many systems, programs and applications accessing the hard disk drive are designed around the 512-byte-per-sector convention. Early engagement with the Long Data Sector Committee provided the opportunity for component and software suppliers to prepare for the transition to Advanced Format.
For example, Windows Vista, Windows 7, Windows Server 2008, and Windows Server 2008 R2 (with certain hotfixes installed) support 512e format drives (but not 4Kn), as do contemporary versions of FreeBSD and Linux. Mac OS X Tiger and onward can use Advanced Format drives, and OS X Mountain Lion 10.8.2 additionally supports encrypting them. Windows 8 and Windows Server 2012 also support 4Kn Advanced Format. Oracle Solaris 10 and 11 support 4Kn and 512e hard disk drives for non-root ZFS file systems, while version 11.1 provides installation and boot support for 512e devices.

Categories

Among the Advanced Format initiatives undertaken by the Long Data Sector Committee, methods to maintain backward compatibility with legacy computing solutions were also addressed. For this purpose, several categories of Advanced Format devices were created.

512 emulation (512e)

Many host computer hardware and software components assume the hard drive is configured around 512-byte sector boundaries. This includes a broad range of items including chipsets, operating systems, database engines, hard drive partitioning and imaging tools, backup and file system utilities, as well as a small fraction of other software applications. In order to maintain compatibility with legacy computing components, many hard disk drive suppliers support Advanced Format technologies on the recording media coupled with 512-byte conversion firmware. Hard drives configured with 4096-byte physical sectors and 512-byte conversion firmware are referred to as Advanced Format 512e, or 512 emulation, drives. The translation of the 4096-byte physical format to a virtual 512-byte increment is transparent to the entity accessing the hard disk drive. Read and write commands are issued to Advanced Format drives in the same format as legacy drives. However, during the read process, the Advanced Format hard drive loads the entire 4096-byte sector containing the requested 512-byte data into memory located on the drive.
The emulation firmware extracts and re-formats the specific data into a 512-byte chunk before sending the data to the host. The entire process typically occurs with little or no degradation in performance. The translation process is more complicated when writing data that is not a multiple of 4K or not aligned to a 4K boundary. In these instances, the hard drive must read the entire 4096-byte sector containing the targeted data into internal memory, integrate the new data into the previously existing data and then rewrite the entire 4096-byte sector onto the disk media. This operation, known as read-modify-write (RMW), can require an additional revolution of the magnetic disks, resulting in a perceptible performance impact to the system user. Performance analysis conducted by IDEMA and the hard drive vendors indicates that approximately five to ten percent of all write operations in a typical business PC user environment may be misaligned, incurring an RMW performance penalty. When using Advanced Format drives with legacy operating systems, it is important to realign the disk drive using software provided by the hard disk manufacturer. Disk realignment is necessary to avoid a performance-degrading condition known as cluster straddling, where a shifted partition causes filesystem clusters to span partial physical disk sectors. Since cluster-to-sector alignment is determined when creating hard drive partitions, the realignment software is used after partitioning the disk. This can help reduce the number of unaligned writes generated by the computing ecosystem. Further activities to make applications ready for the transition to Advanced Format technologies were spearheaded by the Advanced Format Technology Committee (formerly the Long Data Sector Committee) and by the hard disk drive manufacturers.
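The alignment penalty described above can be illustrated with a toy model of the 512e translation layer. This is an illustrative sketch, not vendor firmware; the function name and structure are invented for this example. It counts how many 4 KB physical sectors a write only partially covers, since each partially covered physical sector forces a read-modify-write cycle.

```python
# Toy model of 512e address translation (illustrative only, not vendor firmware).
PHYS, LOG = 4096, 512
RATIO = PHYS // LOG   # 8 logical 512-byte sectors per 4 KB physical sector

def rmw_sectors(start_lba, num_sectors):
    """RMW cycles needed to write logical sectors [start_lba, start_lba + num_sectors)."""
    end_lba = start_lba + num_sectors
    rmw = 0
    for phys in range(start_lba // RATIO, (end_lba - 1) // RATIO + 1):
        # Bytes of this 4 KB physical sector actually covered by the write,
        # measured in logical sectors.
        covered = min(end_lba, (phys + 1) * RATIO) - max(start_lba, phys * RATIO)
        if covered < RATIO:   # partial coverage -> read-modify-write
            rmw += 1
    return rmw

print(rmw_sectors(0, 8))   # 0 -- a 4 KB write aligned to a physical sector boundary
print(rmw_sectors(1, 8))   # 2 -- the same 4 KB write shifted by one logical sector
print(rmw_sectors(3, 1))   # 1 -- a lone 512-byte write always triggers RMW
```

A misaligned partition shifts every filesystem cluster by a fixed number of logical sectors, so even cluster-sized writes land like the second case above, which is why the realignment tools mentioned earlier matter.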
4K native (4Kn)

For hard disk drives working in the 4K native mode, there is no emulation layer in place, and the disk media directly exposes its 4096, 4112, 4160, or 4224-byte physical sector size to the system firmware and operating system. That way, the externally visible logical sector organization of 4K native drives is directly mapped to their internal physical sector organization. Since April 2014, enterprise-class 4K native hard disk drives have been available on the market. Readiness of the support for 4 KB logical sectors within operating systems differs among their types, vendors and versions. For example, Microsoft Windows has supported 4K native drives since Windows 8 and Windows Server 2012 (both released in 2012), with booting from them requiring UEFI. Linux has supported 4K native drives since Linux kernel version 2.6.31 and util-linux-ng version 2.17 (released in 2009 and 2010, respectively). The color version of the logo indicating a 4K native drive is somewhat different from the 512e logo, featuring four rounded corners, a blue background, and the text "4Kn" at the center of the logo.

See also
Partition alignment

References

External links
IDEMA: Advanced Format Technology (archived on September 29, 2011)
Coughlin Associates: Aligning with the Future of Storage (archived on May 5, 2012)
Western Digital: Advanced Format White Paper (September 2018) and its older version (April 2010)
Hitachi Global Storage Technologies: Advanced Format Technology Brief
The Tech Report: Western Digital brings Advanced Format to Caviar Green
Dell: Support: System Image Support for Advanced Format Hard Drives on Dell Business Client Notebooks and Desktops

Hard disk drives Solid-state computer storage
4840
https://en.wikipedia.org/wiki/Blitz%20BASIC
Blitz BASIC
Blitz BASIC is the programming language dialect of the first Blitz compilers, devised by New Zealand-based developer Mark Sibly. Being derived from BASIC, Blitz syntax was designed to be easy to pick up for beginners first learning to program. The languages are game-programming oriented but are often found general-purpose enough to be used for most types of application. The Blitz language evolved as new products were released, with recent incarnations offering support for more advanced programming techniques such as object-orientation and multithreading. This led to the languages losing their BASIC moniker in later years.

History

The first iteration of the Blitz language was created for the Amiga platform and published by the Australian firm Memory and Storage Technology. Returning to New Zealand, Blitz BASIC 2 was published several years later (around 1993, according to a press release) by Acid Software (a local Amiga game publisher). Since then, Blitz compilers have been released on several platforms. Following the demise of the Amiga as a commercially viable platform, the Blitz BASIC 2 source code was released to the Amiga community. Development continues to this day under the name AmiBlitz.

BlitzBasic

Idigicon published BlitzBasic for Microsoft Windows in October 2000. The language included a built-in API for performing basic 2D graphics and audio operations. Following the release of Blitz3D, BlitzBasic is often synonymously referred to as Blitz2D. Recognition of BlitzBasic increased when a limited range of "free" versions were distributed in popular UK computer magazines such as PC Format. This resulted in a legal dispute between the developer and publisher which was eventually resolved amicably.

BlitzPlus

In February 2003, Blitz Research Ltd. released BlitzPlus, also for Microsoft Windows.
It lacked the 3D engine of Blitz3D, but did bring new features to the 2D side of the language by implementing limited Microsoft Windows control support for creating native GUIs. Backwards compatibility of the 2D engine was also extended, allowing compiled BlitzPlus games and applications to run on systems that might only have DirectX 1.

BlitzMax

The first BlitzMax compiler was released in December 2004 for Mac OS X. This made it the first Blitz dialect that could be compiled on *nix platforms. Compilers for Microsoft Windows and Linux were subsequently released in May 2005. BlitzMax brought the largest change of language structure to the modern range of Blitz products by extending the type system to include object-oriented concepts and modifying the graphics API to better suit OpenGL. BlitzMax was also the first of the Blitz languages to represent strings internally using UCS-2, allowing native support for string literals composed of non-ASCII characters. BlitzMax's platform-agnostic command-set allows developers to compile and run source code on multiple platforms. However, the official compiler and build chain will only generate binaries for the platform that it is executing on. Unofficially, users have been able to get Linux and Mac OS X to cross-compile to the Windows platform. BlitzMax is also the first modular version of the Blitz languages, improving the extensibility of the command-set. In addition, all of the standard modules shipped with the compiler are open-source and so can be tweaked and recompiled by the programmer if necessary. The official BlitzMax cross-platform GUI module (known as MaxGUI) allows developers to write GUI interfaces for their applications on Linux (FLTK), Mac (Cocoa) and Windows. Various user-contributed modules extend the use of the language by wrapping such libraries as wxWidgets, Cairo, and Fontconfig, as well as a selection of database modules.
There is also a selection of third-party 3D modules available, namely MiniB3D, an open-source OpenGL engine which can be compiled and used on all three of BlitzMax's supported platforms. In October 2007, BlitzMax 1.26 was released, which included the addition of a reflection module. BlitzMax 1.32 shipped new threading and Lua scripting modules, and most of the standard library functions have been updated so that they are Unicode friendly.

Blitz3D SDK

Blitz3D SDK is a 3D graphics engine based on the engine in Blitz3D. It was marketed for use with C++, C#, BlitzMax, and PureBasic; however, it could also be used with other languages that follow compatible calling conventions.

Max3D module

In 2008, the source code to Max3D, a C++-based cross-platform 3D engine, was released under a BSD license. This engine focused on OpenGL but had an abstract backend for other graphics drivers (such as DirectX) and made use of several open-source libraries, namely Assimp, Boost, and ODE. Despite the excitement in the Blitz community of Max3D being the eagerly awaited successor to Blitz3D, interest and support died off soon after the source code was released and eventually development came to a halt. There is no indication that Blitz Research will pick up the project again.

Open-source release

BlitzPlus was released as open-source on 28 April 2014 under the zlib license on GitHub. Blitz3D followed soon after and was released as open source on 3 August 2014. BlitzMax was later released as open source on 21 September 2015.

Examples

Hello World program that prints to the screen, waits until a key is pressed, and then terminates:

Print "Hello World" ; Prints to the screen.
WaitKey() ; Pauses execution until a key is pressed.
End ; Ends program.

Program that demonstrates the declaration of variables using the three main data types (Strings, Integers and Floats) and printing them onto the screen:

name$ = "John" ; Create a string variable ($)
age = 36 ; Create an integer variable (no suffix)
temperature# = 27.3 ; Create a float variable (#)
Print "My name is " + name$ + " and I am " + age + " years old."
Print "Today, the temperature is " + temperature# + " degrees."
WaitKey() ; Pauses execution until a key is pressed.
End ; Ends program.

Program that creates a windowed application that shows the current time in binary and decimal format. See below for the BlitzMax and BlitzBasic versions:

Software written using BlitzBasic

Eschalon: Book I - BlitzMax
Eschalon: Book II - BlitzMax
Fairway Solitaire - BlitzMax
GridWars - BlitzMax
TVTower (open source clone of MadTV) - BlitzMax
Platypus - Blitz2D (Mac port, BlitzMax)
SCP – Containment Breach - Blitz3D
Worms - originally titled Total Wormage and developed in Blitz Basic on the Amiga before its commercial release

Legacy

In 2011, BRL released a new cross-platform programming language called Monkey and its first official module called Mojo. Monkey has a similar syntax to BlitzMax, but instead of compiling direct to assembly code, it translates Monkey source files directly into source code for a chosen language, framework or platform, e.g. Windows, Mac OS X, iOS, Android, HTML5, and Adobe Flash. Development of Monkey X has been halted in favor of Monkey 2, an updated version of the language by Mark Sibly.
References

External links
Blitz Research subsite on itch.io (BlitzPlus, Blitz 3D, Monkey X, Monkey 2)
Monkey X subsite (open source)
Monkey 2 subsite
blitz-research (Mark Sibly) on GitHub (BlitzPlus, BlitzMax, Blitz3D, Monkey, BlitzMax, Blitz3D for MSVC-CE 2017)
Blitz Research website (archived 3 June 2017)
Monkey X website (archived 15 July 2017)

Amiga development software Articles with example BASIC code BASIC compilers BASIC programming language family Formerly proprietary software Free game engines Free software Object-oriented programming languages Software using the zlib license Video game development software Video game IDE
30770392
https://en.wikipedia.org/wiki/Dell%20Wyse
Dell Wyse
Wyse is an American manufacturer of cloud computing systems. They are best known for their video terminal line introduced in the 1980s, which competed with the market-leading Digital Equipment Corporation. They also had a successful line of IBM PC compatible workstations in the mid-to-late 1980s, but were outcompeted by companies such as Dell starting late in the decade. Current products include thin client hardware and software as well as desktop virtualization solutions. Other products include cloud software-supporting desktop computers, laptops, and mobile devices. Dell Cloud Client Computing is partnered with IT vendors such as Citrix, IBM, Microsoft, and VMware. On April 2, 2012, Dell and Wyse announced that Dell intended to take over the company. With this acquisition Dell surpassed its rival Hewlett-Packard in the market for thin clients. On May 25, 2012, Dell informed the market that it had completed the acquisition, renaming the company Dell Wyse.

History

1980s

Wyse Technology was founded in 1981 by Garwing Wu, Bernard Tse, and Grace Tse. The company became famous in the 1980s as a manufacturer of character terminals. Most of these terminals can emulate several other terminal types in addition to their native escape sequences. These terminals were often used with library card catalogs such as Dynix. In 1983, Wyse began shipping the WY50, a terminal that was priced some 44 percent lower than its nearest competitor. It became their first big-selling product, and had a larger screen and higher resolution than competitor products at the time. Following the WY50 was the WY60, the best-selling general purpose terminal of all time. In addition to standard character-mode operation, the WY60 supported box graphics that could be used to produce more attractive displays. The Wyse 99GT and 160 terminals added graphical capability through Tektronix 4014 emulation. The WY325 and 375 models added color support with Tektronix graphics. In 1984, Wyse entered the personal computer marketplace.
The first of these was the Wyse 1000, a computer based on the Intel 80186 (which did not see huge volumes because its integrated hardware was incompatible with the hardware used in the original IBM PC). Next came the WYSEpc, an IBM-compatible computer based on the 8088 processor, which had a good following due to its slim-line design. Later, Wyse introduced personal computers compatible with the IBM PC/AT based on the 80286 and 80386, which were top sellers. Wyse sold through 2-tier distribution, which limited growth in the late 1980s as mail order companies like Dell and Gateway entered the marketplace. In 1984, Wyse became one of the leaders in the general purpose text (GPT) terminal industry, and on August 17, 1984, went public on the New York Stock Exchange. In the following years, Wyse added the PC product line Wyse pc3216. The Wyse 3216 was based on Intel's newest 386 chip. It sold for $1,500 less than a comparable Compaq DeskPro, $2,000 less than an IBM System 80, and performed at a higher speed than both. In 1989, Wyse developed LAN-attached communication devices.

1990s

Wyse was an early innovator in off-shore electronics production, with its products being built in Taiwan in company-owned facilities. In 1990 Dr. Morris Chang organized Channel International, a Taiwan consortium, which gathered business owners together and was a booster for Taiwanese individuals owning U.S. companies. In 1990, Channel International acquired Wyse. From 1990 to 1994 Wyse focused on PCs with CPU upgradability. Wyse created a proprietary upgradability concept called Modular Systems Architecture, or MSA. In October 1992, Wyse became ISO 9001:2000 certified. In the mid-1990s Wyse Taiwan became the parent company of Wyse Technology. As the PC and server industry became more competitive, in 1994 Wyse management began to focus on making the next generation of terminals. Four employees were directed to investigate and chart the next product course for the company.
In 1994, executives Curt Schwebke and Jeff McNaught proposed a new type of terminal that would combine the low costs of terminals with the advanced display capability of Windows PCs. A year of R&D resulted in the most advanced terminals Wyse had developed to date. They worked on enabling them to support the graphics and capabilities needed to display Microsoft Windows and Internet applications. In late 1994, the company developed two thin client prototypes, and selected Citrix, then a small company, to provide the protocol and server side of the model. The machines differed from traditional text-mode terminals by supporting modern GUI applications using a mouse and windowing systems. The clients are able to access these applications using protocols that send drawing commands or raw pixel data (instead of strings of text characters) over the data connection. Because of the greater bandwidth this requires these machines typically use ethernet connections to the server, rather than the RS-232 links used in the past. In November 1995, Citrix and Wyse shared a booth at the Comdex tradeshow. Wyse introduced the Winterm windows terminal (now referred to as a thin client) models 2000 and 2500. Citrix introduced WinFrame, the Windows NT-based “Windows mainframe” software it connected to. At the show, the Wyse Winterm was awarded the “Best of Comdex” award. Later, Wyse secured a patent (# 5918039) for the thin client design. In 1997, Microsoft released Windows NT Terminal Service Edition, which supported the Wyse thin clients. After the thin clients were well received by the market, Wyse introduced several additional models, including stand-alone (Winterm 2300), LCD monitor-integrated (Winterm 2600), and the tablet-shaped mobile Winterm 2900 and 2930 models. In 1997, Wyse introduced the first thin-client remote management software system, Wyse Remote Administrator. In 1999, Wyse Technology once again went public, but this time on the Taiwan Stock Exchange (TSE). 
2000–present

In 2000 Wyse acquired Netier Technologies of Texas, and turned Netier's Rapport thin device management software into the Wyse Device Manager. In 2003 Wyse went private and company shareholders reorganized the company, selling assets such as real estate and company-owned manufacturing facilities in favor of contract manufacturing. In April 2005 the controlling interest of Wyse was acquired by Garnett & Helfrich Capital, a private equity firm specializing in venture buyouts. In 2005, Wyse, working closely with Citrix, Microsoft, and VMware, expanded thin clients to support the newly introduced virtual desktop infrastructure (VDI). In April, Wyse and IBM signed a Joint Initiative Agreement (JIA). Tarkan Maner was appointed CEO in February 2007. Under Maner, the company significantly expanded research and development. In August 2007, Wyse recapitalized, with overseas investors regaining the controlling interest from Garnett & Helfrich Capital. In March 2008 the company formalized a partnership with Novell. In October of that year, Wyse formed a global partnership with IBM under the Global Alliance Agreement. In August 2010, Wyse created its Mobile Cloud Business Unit with the introduction of Wyse PocketCloud. The mobile cloud app allows users to access their desktop on iOS or Android devices. In the same month, Wyse became ISO 9001:2008 certified, and in November became ISO 14001:2004 certified and announced a "Strategic Collaboration Partnership" with Cisco. The company introduced zero clients in 2010. According to IDC, as of 2011 Wyse is an international leader in what are called "enterprise devices" (terminal clients and thin clients combined). In April 2012, Dell announced an agreement to purchase Wyse for an undisclosed amount. The acquisition was completed on May 25, 2012.
Recent awards

Education Investor Award 2011 Finalist: Technology Supplier of the Year
Wyse Voted as 2011 Top Work Place
2011 Microsoft Windows Embedded OEM Partner Excellence Award
2011 Mobile Merit Awards Winners Announced! - Wyse PocketCloud
TechAmerica Foundation Announces 2011 American Technology Awards Finalists — Wyse Xenith
The Top 20 Cloud Software & Apps Vendors of 2011
2011 Appy Awards Winner — Productivity Category — Wyse PocketCloud
Thin / Zero Client Computing (Winner) - Wyse Xenith 1.0
Tech & Learning Leader of the Year award

Notable employees

Martin Eberhard began his career as an electrical engineer at Wyse Technology, where he designed the WY-30 ASCII computer terminal as his first product. Eberhard went on to be a founder of Tesla Motors. David Dix worked first on the very first Wyse terminals and later on the high-end personal computers. Dix also worked at HP prior to Wyse, and is now working at ShoreTel. Wyse CTO Curt Schwebke and CMO Jeff McNaught prototyped and led the design of the first Winterm products. They are also holders of the first thin client patent. Tarkan Maner was appointed CEO in February 2007.

Facilities

Wyse Technology is headquartered in Silicon Valley in Santa Clara, California. It also has development centers in India and Beijing, China. It has sales offices around the United States and in:

New South Wales, Australia
Beijing, China
Bangalore, India
Tokyo, Japan
United Kingdom
Germany
Netherlands
France
Italy
Turkey
Spain

Environmental initiatives

Wyse has published research on the environmental benefits of cloud client computing. According to the company, to minimize environmental impact, their cloud client computing products are smaller than those of competitors. Up to 90 percent of Wyse products can be recycled, and the hardware meets WEEE recycling process guidelines. The company also has an e-waste recycling program.
Products

Software

Management Software
Wyse Management Suite – Wyse enterprise-class server software that runs either on-premises or in the public cloud and allows easy configuration and management of anywhere from a few to many thousands of Wyse thin clients

Virtualization Software
Wyse Converter for PCs – Wyse software that converts fat clients into thin client-like devices with a combination of both local and server-based computation for increased security, while leveraging existing PC investments
Wyse TCX – Wyse software that resides on Wyse cloud clients to accelerate and enhance the user desktop experience

Hardware

Thin Clients
S10 – Economical, compact thin client running Wyse ThinOS operating system.
C10 – Compact thin client running Wyse ThinOS operating system.
V10LE – Expandable thin client running Wyse ThinOS operating system. Supports dual video and numerous I/O options
R10L – Very thin client running ThinOS operating system, supports multiple video displays and is suited to high-end users running demanding multimedia apps
S30 – Economical, compact thin client running Windows CE operating system
C30LE – Compact thin client running Windows Embedded operating system.
V30LE – Expandable thin client running Windows Embedded Compact operating system. Supports dual video and numerous I/O options
C50LE – Compact thin client running a Linux operating system.
T50 – Compact, economical thin client running Ubuntu Linux operating system. Sets a new price/performance standard for thin clients.
V50LE – Expandable thin client running Linux operating system. Supports dual video and numerous I/O options.
R50L – High performance thin client running Linux. Supports dual video and numerous I/O options.
R50LE – The R50L with an expansion slot to add more connectivity options.
C90LE – Compact thin client running Windows XPe operating system.
V90LE – Expandable thin client running Windows XPe operating system.
R90L – High performance thin client running Windows XPe operating system. Supports dual video and numerous I/O options.
R90LE – The R90L with an expansion slot to add more connectivity options.
C90LEW – Compact thin client running Windows Embedded Standard 2009 operating system.
V90LEW – Expandable thin client running Windows Embedded Standard 2009 operating system.
R90LW – High performance thin client running Windows Embedded Standard 2009 operating system. Supports dual video and numerous I/O options.
R90LEW – The R90LW with an expansion slot to add more connectivity options.
Z90SW – Wyse's highest performance single-core processor thin client running Windows Embedded Standard 2009 operating system. Supports dual hi-def video and numerous I/O options.
Z90DW – The Z90SW except dual-core
C90LE7 – Compact thin client running Windows Embedded Standard 7 operating system.
R90L7 – High performance thin client running Windows Embedded Standard 7 operating system. Supports dual video and numerous I/O options.
Z90S7 – Wyse's highest performance single-core processor thin client running Windows Embedded Standard 7 operating system. Supports dual hi-def video and numerous I/O options.
Z90D7 – The Z90S7 with a dual-core processor

Zero Clients
E02 – Wyse zero client for use with Microsoft Windows MultiPoint Server (WMS) 2011
Xenith / Xenith Pro – Wyse zero client family for Citrix. Both are designed for Citrix HDX environments; Xenith Pro offers extra performance and connectivity options for high-end, demanding multimedia applications.
P20 – Wyse zero client for VMware. Leverages on-chip PCoIP processing to increase performance and graphics display.

Cloud PCs
C00LE – Compact cloud PC
V00LE – Expandable cloud PC supporting dual video and numerous I/O options
R00L – High performance cloud PC with dual video and numerous I/O options
R00LE – The R00L with an expansion slot to add more connectivity options.
Z00D – Wyse's highest performance cloud PC with dual hi-def video and numerous I/O options.

Mobile Clients
X50c – Mobile thin client running Linux operating system
X90cw – Mobile thin client running Windows Embedded Standard 2009 operating system with an 11.6” display.
X90c7 – Similar to the X90cw, except it runs on Windows Embedded Standard 7
X90mW – Mobile thin client running Windows Embedded Standard 2009 operating system, dual-core processor, and a 14” display.
7492-X90m7 – Similar to the X90mW except it runs on Windows Embedded Standard 7 operating system.

Terminals
WYSE-100 - First terminal
WYSE-50 - The first really mass market terminal
WYSE-30 - New design, more features
WYSE-60 - Another step up in product capabilities

Personal Computers
WYSE-1000 - Wyse's first computer - 80186 based computer - paired with WYSE-50 monitor. Ran MS-DOS 3.2.
WYSEpc - Wyse's first IBM compatible computer.
WYSE-2100 Series - Wyse computers utilizing 80286 processors, also featured passive backplane to allow CPU upgrades after purchase.
WYSE-3100 Series - Wyse computers utilizing 80386 processors, also featured passive backplane to allow CPU upgrades after purchase.

See also
Cloud computing
Desktop virtualization
Green IT
Thin client
Virtual desktop
Virtualization

References

External links
Wyse

Computer companies of the United States Computer terminals Companies established in 1981 Thin clients Dell Companies based in San Jose, California Wyse
29536169
https://en.wikipedia.org/wiki/Key%20disclosure%20law
Key disclosure law
Key disclosure laws, also known as mandatory key disclosure, are legislation that requires individuals to surrender cryptographic keys to law enforcement. The purpose is to allow access to material for confiscation or digital forensics purposes, so that it can be used either as evidence in a court of law or to enforce national security interests. Similarly, mandatory decryption laws force owners of encrypted data to supply decrypted data to law enforcement. Nations vary widely in the specifics of how they implement key disclosure laws. Some, such as Australia, give law enforcement wide-ranging power to compel assistance in decrypting data from any party. Some, such as Belgium, concerned with self-incrimination, only allow law enforcement to compel assistance from non-suspects. Some require only specific third parties such as telecommunications carriers, certification providers, or maintainers of encryption services to provide assistance with decryption. In all cases, a warrant is generally required. Theory and countermeasures Mandatory decryption is technically a weaker requirement than key disclosure, since it is possible in some cryptosystems to prove that a message has been decrypted correctly without revealing the key. For example, with RSA public-key encryption, one can verify, given the plaintext, the ciphertext, and the recipient's public key, that the decryption is correct simply by re-encrypting the plaintext and comparing the result to the ciphertext. Such a scheme is called undeniable, since once the government has validated the message it cannot deny that it is the correct decrypted message. As a countermeasure to key disclosure laws, some personal privacy products such as BestCrypt, FreeOTFE, and TrueCrypt have begun incorporating deniable encryption technology, which enables a single piece of encrypted data to be decrypted in two or more different ways, creating plausible deniability.
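The re-encryption check behind "undeniable" decryption, described above, can be illustrated with textbook RSA. This is only a toy sketch with a tiny, insecure demonstration key; real RSA deployments use randomized padding (e.g. OAEP), which deliberately defeats this kind of deterministic comparison.

```python
# Textbook (unpadded) RSA demonstration of verifying a disclosed
# decryption without ever seeing the private key.

n, e, d = 3233, 17, 2753   # toy key: n = 61 * 53; NOT secure

def encrypt(m, e, n):
    # Textbook RSA encryption: c = m^e mod n
    return pow(m, e, n)

def verify_disclosure(claimed_plaintext, ciphertext, e, n):
    """Anyone holding only the public key (e, n) can check a claimed
    decryption by re-encrypting it and comparing to the ciphertext."""
    return encrypt(claimed_plaintext, e, n) == ciphertext

message = 65
ct = encrypt(message, e, n)     # deterministic, so the check below works
disclosed = pow(ct, d, n)       # the key holder decrypts and discloses this

assert verify_disclosure(disclosed, ct, e, n)          # honest disclosure passes
assert not verify_disclosure(disclosed + 1, ct, e, n)  # forged plaintext fails
```

The government can thus confirm or refute a claimed decryption using public information alone, which is why such a disclosure cannot later be denied.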
Another alternative is steganography, which hides encrypted data inside of benign data so that it is more difficult to identify in the first place. A problematic aspect of key disclosure is that it leads to a total compromise of all data encrypted using that key in the past or future; time-limited encryption schemes such as those of Desmedt et al. allow decryption only for a limited time period. Criticism and alternatives Critics of key disclosure laws view them as compromising information privacy, by revealing personal information that may not be pertinent to the crime under investigation, as well as violating the right against self-incrimination and more generally the right to silence, in nations which respect these rights. In some cases, it may be impossible to decrypt the data because the key has been lost, forgotten or revoked, or because the data is actually random data which cannot be effectively distinguished from encrypted data. A proactive alternative to key disclosure law is key escrow law, where the government holds in escrow a copy of all cryptographic keys in use, but is only permitted to use them if an appropriate warrant is issued. Key escrow systems face difficult technical issues and are subject to many of the same criticisms as key disclosure law; they avoid some issues like lost keys, while introducing new issues such as the risk of accidental disclosure of large numbers of keys, theft of the keys by hackers or abuse of power by government employees with access to the keys. It would also be nearly impossible to prevent the government from secretly using the key database to aid mass surveillance efforts such as those exposed by Edward Snowden. The ambiguous term key recovery is applied to both types of systems. Legislation by nation This list shows only nations where laws or cases are known about this topic. 
Antigua and Barbuda The Computer Misuse Bill, 2006, Article 21(5)(c), if enacted, would allow police with a warrant to demand and use decryption keys. Failure to comply may incur "a fine of fifteen thousand [East Caribbean] dollars" and/or "imprisonment for two years." Australia The Cybercrime Act 2001 No. 161, Items 12 and 28, grants police with a magistrate's order the wide-ranging power to require "a specified person to provide any information or assistance that is reasonable and necessary" to allow the officer to access computer data that is "evidential material"; this is understood to include mandatory decryption. Failing to comply carries a penalty of 6 months' imprisonment. Electronic Frontiers Australia calls the provision "alarming" and "contrary to the common law privilege against self-incrimination." The Crimes Act 1914, 3LA(5): "A person commits an offence if the person fails to comply with the order. Penalty for contravention of this subsection: Imprisonment for 2 years." Belgium The Loi du 28 novembre 2000 relative à la criminalité informatique (Law on computer crime of 28 November 2000), Article 9, allows a judge to order those operating computer systems, as well as telecommunications providers, to provide assistance to law enforcement, including mandatory decryption, and to keep their assistance secret; but this action cannot be taken against suspects or their families. Failure to comply is punishable by 6 months to 1 year in jail and/or a fine of 130 to 100,000 euros. Cambodia Cambodia promulgated its Law on Electronic Commerce on 2 November 2019, after passage through the legislature and consent from the monarch, becoming the last among ASEAN states to adopt a domestic law governing electronic commerce. Article 43 of the statute prohibits any encryption of evidence in the form of data that could lead to an indictment, or of any evidence in an electronic system that relates to an offense.
This statutory obligation may imply that authorities could order the decryption of any data implicated in an investigation. While it remains untested in the courts, this obligation actively contradicts an accused person's procedural right against self-incrimination as provided under Article 143 of the Code of Criminal Procedure. Canada In Canada key disclosure is covered under section 11(c) of the Canadian Charter of Rights and Freedoms, which states that "any person charged with an offence has the right not to be compelled to be a witness in proceedings against that person in respect of the offence", and which protects the rights of individuals, both citizens and non-citizens of Canada, as long as they are physically present in Canada. In a 2010 Quebec Court of Appeal case the court stated that a password compelled from an individual by law enforcement "is inadmissible and that renders the subsequent seizure of the data unreasonable. In short, even had the seizure been preceded by judicial authorization, the law will not allow an order to be joined compelling the respondent to self-incriminate." In a 2019 Ontario court case (R v. Shergill), the defendant was initially ordered to provide the password to unlock his phone. However, the judge concluded that providing a password would be tantamount to self-incrimination by testifying against oneself. As a result, the defendant was not compelled to provide his password. Czech Republic In the Czech Republic there is no law specifying an obligation to surrender keys or passwords. The law provides protection against self-incrimination, including the lack of penalization for refusing to answer any question that would enable law enforcement agencies to obtain access to potential evidence which could be used against the testifying person.
Finland The Coercive Measures Act (Pakkokeinolaki) 2011/806, section 8, paragraph 23, requires the system owner, its administrator, or a specified person to surrender the necessary "passwords and other such information" in order to provide access to information stored on an information system. The suspect, and certain other persons specified in section 7, paragraph 3, who cannot otherwise be called as witnesses, are exempt from this requirement. France Law #2001-1062 of 15 November 2001 on Community Safety allows a judge or prosecutor to compel any qualified person to decrypt or surrender keys to make available any information encountered in the course of an investigation. Failure to comply incurs three years of jail time and a fine of €45,000; if compliance would have prevented or mitigated a crime, the penalty increases to five years of jail time and €75,000. Germany The German Code of Criminal Procedure grants a suspect the right to refuse cooperation in an investigation that may lead to incriminating information being revealed about themselves. Owing to this nemo tenetur principle, there is no legal basis that would compel a suspect to hand over any kind of cryptographic key used privately. There are, however, various laws (tax, criminal, etc.) stating that companies must ensure that such data is readable by the government; this includes the need to disclose the keys or the unencrypted content as and when required. Iceland In Iceland there is no law specifying an obligation to surrender keys or passwords. India Section 69 of the Information Technology Act, as amended by the Information Technology (Amendment) Act, 2008, empowers the central and state governments to compel assistance from any "subscriber or intermediary or any person in charge of the computer resource" in decrypting information. Failure to comply is punishable by up to seven years' imprisonment and/or a fine.
Ireland Section 7(4)(b) of the Criminal Justice (Offences Relating to Information Systems) Act 2017 allows a member of the Garda Síochána or other persons as deemed necessary (via a search warrant issued by a judge of the District Court (Section 7(1))) to demand the disclosure of a password to operate a computer and any decryption keys required to access the information contained therein: "7(4) A member acting under the authority of a search warrant under this section may— (a) operate any computer at the place that is being searched or cause any such computer to be operated by a person accompanying the member for that purpose, and (b) require any person at that place who appears to the member to have lawful access to the information in any such computer— (i) to give to the member any password necessary to operate it and any encryption key or code necessary to unencrypt the information accessible by the computer, (ii) otherwise to enable the member to examine the information accessible by the computer in a form in which the information is visible and legible, or (iii) to produce the information in a form in which it can be removed and in which it is, or can be made, visible and legible." New Zealand New Zealand Customs had sought the power to compel key disclosure. Although New Zealand may not have a key disclosure law, it has since enforced penalties against travelers unwilling to unlock mobile devices when compelled to do so by officials. Poland In the relatively few known cases in which police or a prosecutor requested cryptographic keys from those formally accused and the requests were not fulfilled, no further consequences were imposed on the accused. There is no specific law on this matter, as there is, for example, in the UK. It is generally assumed that the Polish Criminal Procedure Code (Kodeks Postępowania Karnego Dz.U. 1997 nr 89 poz. 555)
provides means of protection against self-incrimination, including the lack of penalization for refusing to answer any question that would enable law enforcement agencies to obtain access to potential evidence which could be used against the testifying person. South Africa Under the RICA Act of 2002, refusal to disclose a cryptographic key in one's possession could result in a fine of up to ZAR 2 million or up to 10 years' imprisonment. This requires a judge to issue a decryption direction to a person believed to hold the key. Spain Spain's Criminal Procedure Law grants suspects rights against self-incrimination, which would prevent a suspect from being compelled to reveal passwords. However, a judge may order third parties to collaborate with any criminal investigation, including revealing decryption keys, where possible. Sweden There are currently no laws that force the disclosure of cryptographic keys. However, there is legislation proposed on the basis that the Council of Europe has already adopted a convention on cybercrime related to this issue. The proposed legislation would allow police to require an individual to disclose information, such as passwords and cryptographic keys, during searches. The proposal has been introduced to make the work of police and prosecutors easier, and has been criticized by the Swedish Data Protection Authority. Switzerland In Switzerland there is no law specifying an obligation to surrender keys or passwords. The Netherlands Article 125k of the Wetboek van Strafvordering allows investigators with a warrant to access information carriers and networked systems. The same article allows the district attorney and similar officers of the court to order persons who know how to access those systems to share their knowledge in the investigation, including any knowledge of encryption of data on information carriers. However, such an order may not be given to the suspect under investigation.
United Kingdom The Regulation of Investigatory Powers Act 2000 (RIPA), Part III, activated by ministerial order in October 2007, requires persons to decrypt information and/or supply keys to government representatives to decrypt information without a court order. Failure to disclose carries a maximum penalty of two years in jail, or five years in cases involving national security or child indecency. The provision was first used against animal rights activists in November 2007, and at least three people have been prosecuted and convicted for refusing to surrender their encryption keys, one of whom was sentenced to 13 months' imprisonment. Even politicians responsible for the law have voiced concerns that its broad application may be problematic. Subsection (9) of section 49 failed to consider that mere authentication can be used in a way analogous to encryption, making it possible to circumvent the law via chaffing and winnowing. In 2017, schedule 7 of the Terrorism Act 2000 was used to charge Muhammad Rabbani with "wilfully obstructing or seeking to frustrate a search examination" after he allegedly refused to disclose passwords. He was later convicted. In 2018, Stephen-Alan Nicholson, the prime suspect in a murder case, was charged with refusing to provide his Facebook password to police. United States The Fifth Amendment to the United States Constitution protects witnesses from being forced to incriminate themselves, and there is currently no law regarding key disclosure in the United States. However, the federal case In re Boucher may be influential as case law. In this case, a man's laptop was inspected by customs agents and child pornography was discovered. The device was seized and powered down, at which point disk encryption technology made the evidence unavailable.
The judge held that, since it was a foregone conclusion that the content existed because it had already been seen by the customs agents, Boucher's encryption password "adds little or nothing to the sum total of the Government's information about the existence and location of files that may contain incriminating information." In another case, a district court judge ordered a Colorado woman to decrypt her laptop so that prosecutors could use the files against her in a criminal case: "I conclude that the Fifth Amendment is not implicated by requiring production of the unencrypted contents of the Toshiba Satellite M305 laptop computer," Colorado U.S. District Judge Robert Blackburn ruled on January 23, 2012. In Commonwealth v. Gelfgatt, the court ordered a suspect to decrypt his computer, citing an exception to the Fifth Amendment that can be invoked because "an act of production does not involve testimonial communication where the facts conveyed already are known to the government...". However, in United States v. Doe, the United States Court of Appeals for the Eleventh Circuit ruled on 24 February 2012 that forcing the decryption of one's laptop violates the Fifth Amendment. The Federal Bureau of Investigation may also issue national security letters that require the disclosure of keys for investigative purposes. One company, Lavabit, chose to shut down rather than surrender its master private keys, because the government wanted to spy on Edward Snowden's emails. Since the summer of 2015, cases have been fought between major tech companies such as Apple and government agencies over the regulation of encryption, with agencies asking for access to private encrypted information for law enforcement purposes.
A technical report was written and published by the MIT Computer Science and Artificial Intelligence Laboratory, in which Ronald Rivest, an inventor of RSA, and Harold Abelson, a computer science professor at MIT, together with others, explain the technical difficulties, including the security issues, that arise from the regulation of encryption or from making a key available to a third party for the purpose of decrypting any possible encrypted information. The report lists scenarios and raises questions for policy makers. It also asks for more technical details if the request for regulating encryption is to be pursued further. In 2019, the Pennsylvania Supreme Court, in a ruling that controls only for that state's law, held that a suspect in a child pornography case could not be compelled to reveal his password, despite having told the police "We both know what's on there." See also Deniable encryption FBI–Apple encryption dispute Secret sharing Rubber-hose cryptanalysis Crypto Wars References Further reading Bert-Jaap Koops. Bert-Jaap Koops homepage: Crypto Law Survey: Overview per country. Version 26.0. Universiteit van Tilburg. July 2010. Stephen Mason, gen ed, Electronic Evidence (3rd edn, LexisNexis Butterworths, 2012) Chapter 6 Encrypted data List of legal case studies Cryptography law Encryption debate
13462045
https://en.wikipedia.org/wiki/Man%3A%20A%20Course%20of%20Study
Man: A Course of Study
Man: A Course of Study, usually known by the acronym MACOS or M.A.C.O.S., was an American humanities teaching program, initially designed for middle school and upper elementary grades, and popular in America and Britain in the 1970s. It was based on the theories of Jerome Bruner, particularly his concept of the "spiral curriculum". This suggested that a concept might be taught repeatedly within a curriculum, but at a number of levels, each more complex than the previous one. The process of repetition would thus enable the child to absorb more complex ideas easily. In MACOS, the concept was "the chain of life" or a "lifeline": the entire history of a living thing. The course started with a simple lifespan in the form of the Pacific Coast salmon. It then moved on to the more complex life form of the herring gull, introducing concepts such as nurturing. The lifespan of the baboon was next examined, particularly within the societal context afforded by the baboon troop. The differences between innate behaviour and learned behaviour were introduced. Finally, the study opened up into the study of the human lifespan with a case study of the Netsilik Inuit. This also included the interaction between the Netsilik and other life forms, such as reindeer and seals. The course comprised a self-contained kit of course materials, film cassettes, visual aids, and games. Some of the activities were very imaginative; a game based upon reindeer migration had a loaded die to introduce discussion about instincts, and a paper seal would be cut up and shared among class members representing various people in the Netsilik community, according to a ritual governing who was entitled to which part of the animal. The emphasis of the course was upon learning particular skills within the teaching process, not upon the significance of the content. This included the necessity to ask questions, discuss, and reach conclusions based on evidence and argument.
The course was much criticized in the United States because of its emphasis upon questioning aspects of the American tradition, including Western paradigms of belief and morality. Critics also challenged its basis in empirical science; Robert Flaherty's Nanook of the North was initially misrepresented as a scientific report on the Inuit society, but later recognized as a revolutionary film and considered the intellectual ancestor of the documentary form. The course booklet itself contained criticisms of its validity, particularly from fundamentalist groups. In 2004, the National Film Board of Canada produced Through These Eyes, a documentary about the controversy surrounding MACOS, and more generally about the interplay between politics and education. References External links Digital archive of the MACOS curriculum materials "Education: Teaching Man to Children" from Time magazine Curricula Alternative education Early childhood education materials Visual anthropology Controversies in the United States
45464244
https://en.wikipedia.org/wiki/Hierarchical%20Cluster%20Engine%20Project
Hierarchical Cluster Engine Project
Hierarchical Cluster Engine (HCE) is a FOSS complex solution for:
- constructing a custom network mesh or distributed network cluster structure with several relation types between nodes,
- formalizing the data-flow processing that goes from an upper-level central source point down to the nodes and back,
- formalizing the handling of management requests from multiple source points,
- natively supporting the reduction of results from multiple nodes (aggregation, duplicate elimination, sorting, and so on),
- internally supporting a powerful full-text search engine and data storage,
- providing transaction-less and transactional request processing,
- supporting flexible run-time changes of the cluster infrastructure,
- and providing many language bindings for client-side integration APIs,
all in one product built in the C++ language. The project became the successor of the Associative Search Machine (ASM) full-text web search engine project, developed from 2006 to 2012 by IOIX Ukraine. The HCE project products: the hce-node core (HCE-node application), the network transport cluster infrastructure engine; and the Bundle: the Distributed Crawler service (HCE-DC), the Distributed Tasks Manager service (HCE-DTM), a PHP language API and console management tools, a Python language API and management tools, Python data-processing algorithms, and utilities.
Together these form a set of applications that can be used to construct different distributed solutions, such as: remote process execution management; data processing (including text mining with NLP); web site crawling (incremental, periodic, with flexible and adaptive scheduling, RSS feeds, and custom structured sources); web site data scraping (with pre-defined and custom scrapers, XPath templates, and sequential and optimized scraping algorithms); a web search engine (a complete cycle including crawling, scraping, and a distributed search index based on the Sphinx indexing engine); corporate integrated full-text search based on a distributed Sphinx engine index; and many other applied solutions with similar business logic. HCE-node application The heart and main component of the HCE project is the hce-node application. This application integrates a complete set of base functionality to support the network infrastructure, hierarchical cluster construction, full-text search system integration, and so on. It is implemented for the Linux OS environment and distributed as a source code tarball archive and as a Debian Linux binary package with dependency packages. It supports configuration-less start of a single instance, or requires a set of options used to build the corresponding network cluster architecture. It is intended for use with client-side applications or an integrated API. The first implementation of the client-side API and CLI utilities is bound to PHP. HCE application area: As a network infrastructure and message transport layer provider, HCE can be used in any big-data solution that needs a custom network structure to build a distributed, high-performance, easily vertically and horizontally scalable data processing or data-mining architecture.
As a natively supported full-text search engine interface provider, HCE can be used in web or corporate network solutions that need fast and powerful full-text search and NOSQL distributed data storage, smoothly integrated using the target project's natural languages. Currently the Sphinx search engine with an extended data model is supported internally. As a Distributed Remote Command Execution service provider, HCE can be used to automate the administration of many host servers in ensemble mode for OS and service deployment, maintenance, and support tasks. The Hierarchical Cluster as an engine: Provides the hierarchical cluster infrastructure – node connection schema, relations between nodes, node roles, request typification and data-processing sequence algorithms, data sharding modes, and so on. Provides the network transport layer for client application data and administration management messages. Manages the natively supported integrated NOSQL data storage, the Sphinx search index, and Distributed Remote Command Execution. Collects, reduces, and sorts the results of native and custom data processing. Ready to support transactional message processing. Hce-node roles in the cluster structure: Internally the HCE-node application contains seven basic handler threads. Each handler acts as a specialized black-box message processor/dispatcher and is used in combination with the others so that the node works in one of five different roles: Router – the upper end-point of the cluster hierarchy. Has three server-type connections. Handles the client API, instances of any other node role (typically shard or replica managers), and admin connections. Shard manager – an intermediate point of the cluster hierarchy. Routes messages between the upper and lower layers, using data-sharding and multicast message-dispatching algorithms. Has two server-type connections and one client connection. Replica manager – the same as the shard manager.
Routes messages between the upper and lower layers, using data-balancing and round-robin message-dispatching algorithms. Replica – the lower end-point of the cluster hierarchy: the data node. It interacts with the data storage and/or processes data with the target algorithm(s), provides the interface with the full-text search engine, and serves as the target host for Distributed Remote Command Execution. Has one server-side and one client-side connection used for the cluster infrastructure, and can also have several data-storage-dependent connections. Bundle Both the DTM and DC applications are provided with a set of functional tests and demo-operation automation scripts based on the Linux shell. The Bundle distribution is provided as a zip archive that needs some environment setup before the functionality is ready. Distributed Crawler service (HCE-DC) This is a Linux OS daemon application that implements the business-logic functionality of a distributed web crawler and document data processor. It is based on the main functionality of the DTM application and the hce-node DRCE Functional Object functionality, and runs web crawling, processing, and other related tasks as isolated session executable modules with common business-logic encapsulation. The crawler also contains a raw-contents storage subsystem based on the file system (which can be customized to support key-value storage or SQL). The application uses several DRCE clusters to construct the network infrastructure; MySQL and SQLite back-ends for indexed data (sites, URLs, contents, and configuration properties); and a key-value data storage for the processed contents of pages or documents. Additionally, an administrative web application UI is available for easy and flexible management. This UI also implements automation of data collection and processing with scheduling algorithms, aggregation of scraped data from several projects, creation of data archives, export of data to an external SQL database with a custom schema, many statistical reports of periodic data crawling and scraping activity, and more.
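The shard-manager and replica-manager dispatch modes described above can be sketched generically. This is an illustration of the two ideas only, assuming nothing about HCE's actual code or API: a shard manager routes each message by a stable hash of its data key, while a replica manager balances messages round-robin across interchangeable replicas.

```python
import itertools
import zlib

def shard_route(key: str, downstream: list) -> str:
    """Deterministic key -> node mapping: the same key always lands on
    the same shard, which is what makes sharded storage addressable."""
    return downstream[zlib.crc32(key.encode()) % len(downstream)]

def round_robin(downstream: list):
    """Cycle through identical replicas, one message per node in turn."""
    return itertools.cycle(downstream)

nodes = ["replica-0", "replica-1", "replica-2"]

# Sharding is stable: repeated lookups of one key hit one node.
assert shard_route("doc-42", nodes) == shard_route("doc-42", nodes)

# Round-robin spreads consecutive messages evenly across replicas.
dispatch = round_robin(nodes)
assert [next(dispatch) for _ in range(4)] == [
    "replica-0", "replica-1", "replica-2", "replica-0"]
```

Hash-based sharding keeps data addressable (a given key always resolves to the same node), while round-robin balancing spreads load across replicas that hold identical data.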
Distributed Tasks Manager service (HCE-DTM) This is a Linux OS multi-threaded daemon application that implements the business-logic functionality of task management, using the DRCE Cluster Execution Environment to manage tasks as remote processes. It implements general management operations for distributed task scheduling, execution, state checking, OS resource monitoring, and so on. The application can be used for parallel task execution with state monitoring on a hierarchical network cluster infrastructure with a custom node-connection schema. It is a multipurpose application aimed at covering the needs of projects with big-data computations, distributed data processing, and multi-host data processing with OS system resource balancing, limitations, and so on. It supports several balancing modes, including multicast, random, round-robin, and system-resource-usage algorithms. It also provides high-level state checking, statistics, and diagnostic automation based on the natural hierarchy and relations between nodes, and supports message routing as a task- and data-balancing method for task management. Utilities This is a set of console applications, separated by role and functionality, that can be united into chains for sequential server-side data processing or used as self-sufficient tools. The utilities are designed as common functional units for typical web projects that need to obtain huge amounts of data from the web or other sources and then parse, convert, and process it. They support a unified input-output interface and a JSON message-interaction format. The first implemented utility application is the Highlighter: a utility for fast, parallel, multi-algorithm highlighting of textual patterns. It provides a CLI, works as a console filter tool, and uses JSON-format protocol messages for input and output interaction.
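The Highlighter's filter-style JSON interaction might look roughly like the following sketch. The field names ("query", "content", "stats") and the <hl> markup are illustrative assumptions only, not the actual HCE protocol schema.

```python
import json
import re

def highlight(request_json: str) -> str:
    """Read a JSON request, mark every entrance of each query word with
    <hl>...</hl> tags, and return JSON with the marked content plus
    per-word entrance counts as the stat information."""
    request = json.loads(request_json)
    marked = request["content"]
    stats = {}
    for word in request["query"].split():
        pattern = re.compile(re.escape(word), re.IGNORECASE)
        # \g<0> re-inserts the matched text, preserving its original case.
        marked, count = pattern.subn(r"<hl>\g<0></hl>", marked)
        stats[word] = count
    return json.dumps({"content": marked, "stats": stats})

request = json.dumps({"query": "cluster engine",
                      "content": "A cluster engine manages the cluster."})
response = json.loads(highlight(request))
assert response["stats"] == {"cluster": 2, "engine": 1}
assert "<hl>cluster</hl>" in response["content"]
```

A real implementation would read the request from stdin and write the response to stdout, matching the console-filter usage described above.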
Highlighting is a text-processing algorithm that takes a search query string and textual content as input and returns the textual content with marks at the entrances of the patterns from the search query, together with additional statistical information. Patterns are usually lexical words but, depending on the stemming and tokenizing processes, can be more complex constructions. Demo installations Several pre-configured VM images for VMware and VirtualBox have been uploaded to make the getting-started process faster. The user name is "root" and the password is the same. The target user for the DTS archive is "hce", with the same password. The VM files are available zipped. License GNU General Public License version 2 References Internet search engines Free web crawlers Web crawlers
34922275
https://en.wikipedia.org/wiki/International%20Airport%20Centers%2C%20L.L.C.%20v.%20Citrin
International Airport Centers, L.L.C. v. Citrin
In International Airport Centers, L.L.C. v. Citrin, the Seventh Circuit Court of Appeals evaluated the dismissal of the plaintiffs' lawsuit for failure to state a claim based upon the interpretation of the word "transmission" in the Computer Fraud and Abuse Act. Jacob Citrin had been employed by IAC, which had lent him a laptop for use while under its employment. Upon leaving IAC, he deleted the data on the laptop before returning it. The Court of Appeals reversed the decision and reinstated IAC's lawsuit. Facts International Airport Centers, L.L.C. (IAC) is a group of companies in the real estate business. IAC employed the defendant, Jacob Citrin, to identify potential acquisitions and record data about these properties. IAC lent Citrin a laptop for this purpose. Citrin became self-employed and quit IAC, breaching his employment contract in the process. He deleted the data on the laptop before returning it to IAC, using secure-erasure software that rendered the files irrecoverable. This process destroyed data that he had collected for IAC in addition to data revealing improper workplace conduct. The relevant provision of the Computer Fraud and Abuse Act provides that whoever "knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer" violates the Act. Citrin argued that erasing a file from a computer is not a "transmission." The district court agreed and dismissed the lawsuit, determining that the deletion of the files did not violate the CFAA. However, Circuit Judge Posner examined the "transmission" of the secure-erasure program to the computer, stating that the method of transmission here – whether it was installed through a network or a disc – is irrelevant. "Damage" here included "any impairment to the integrity or availability of data, a program, a system, or information."
Ruling The court ruled that Citrin's authorization terminated with his breach of his duty of loyalty in quitting, and that his actions were "exceeding authorized access", as defined by the CFAA to be "access[ing] a computer with authorization and…us[ing] such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter." While Citrin argued that his employment contract authorized him to "return or destroy" data in the laptop, it was unlikely that this was intended to authorize him to irreversibly destroy data that the company had no copies of, or data that incriminated him in misconduct. Therefore, the judgment was reversed with directions to reinstate the suit. This case was one of a series of cases that applied the CFAA to employee misconduct. Originally, the CFAA had been crafted to prevent criminal hacking in government interest computers. The definition of "authorization" used in Citrin would be later rejected by the Ninth Circuit court in LVRC Holdings v. Brekka in favor of a more narrow "active authorization" that is granted by an authority. See also Computer Fraud and Abuse Act United States v. Lori Drew United States v. Nosal References External links William Kane represented International Airport Centers Just Add Plaintiff: The Seventh Circuit’s Recipe for Instant Liability Under the CFAA United States Code: Title 18,1030. Fraud and related activity in connection with computers 7th Circuit Applies Computer Hacking Statute to Use of Trace Removers on Employee Laptops International Airport Centers v. Citrin: Employment Issues InsideCounsel: Combating employee misconduct United States Court of Appeals for the Seventh Circuit cases United States Internet case law United States federal criminal case law
26447288
https://en.wikipedia.org/wiki/Pakistan%20Customs
Pakistan Customs
Pakistan Customs is one of the elite cadres of the Civil Services of Pakistan. It serves as the guardian of Pakistan's borders against the movement of contraband goods and encourages bona fide trade. It provides a major source of revenue to the Government of Pakistan in the form of the taxes it levies. It also helps to protect domestic industry and stimulate trade. Pakistan Customs is manned by officers of the Pakistan Customs Service (PCS), one of the premier occupational groups among Pakistan's civil services. Previously known as the "Customs & Excise Group", it was re-classified as the Pakistan Customs Service (PCS) in November 2010, when the responsibilities for Sales Tax and Federal Excise were taken away and a new occupational service, the Inland Revenue Service (IRS), was created to collect Sales Tax, Federal Excise and Income Tax. This has allowed PCS officers to focus on their core function of guarding the nation's borders against smuggling, illegitimate trade and money laundering. The role of the Pakistan Customs Service as a law enforcement agency has been greatly enhanced, with a focus on border control. The anti-smuggling powers previously delegated by Pakistan Customs to the Pakistan Rangers, Police, Frontier Constabulary and Levies have been withdrawn in view of the expansion of Pakistan Customs' role in the border regions. The shift of Pakistan Customs to a border control agency with substantial responsibility for safeguarding the country's trade policies, intellectual property rights, transit trade, anti-money-laundering and anti-smuggling enforcement appears to be the future of the Pakistan Customs Service. Pakistan Customs is the only law enforcement agency that also has jurisdiction at sea, operating within 200 nautical miles (370.4 km; 230.2 mi), called Pakistan Customs Waters, to prevent smuggling activities independently, and it also carries out joint operations with the Pakistan Coast Guard and the Pakistan Maritime Security Agency.
Ranks in Pakistan Customs Service
Chairman FBR / Member Customs / Chief Collector (North/South/Central)
Collector of Customs
Additional Collector of Customs
Deputy / Assistant Collector of Customs
Superintendent of Customs / Superintendent of Intelligence & Investigation / Superintendent of Preventive Services / Principal Appraiser
Deputy Superintendent of Customs / Senior Intelligence Officer / Inspector Preventive Services / Appraising Officer
Inspector of Customs / Intelligence Officer / Senior Preventive Officer / Examining Officer
Office Superintendent
Stenographer
Stenotypist
Wireless Operator
Head Clerk
Upper Division Clerk (UDC)
Lower Division Clerk (LDC)
Head Constable / Hawaldar / Kot Gusht
Driver / Motor Boat Driver
Constable / Sepoy / K9 Handler
Dispatch Rider / Machine Operator / Daftari / Naib Qasid / Lasca
At other levels, there is also a sub-inspector rank below inspector and a sub-deputy rank below deputy.
History of Pakistan Customs The origin of an organized Customs Department in the sub-continent can be traced to 1878, when maritime Customs operations were institutionalized under Her Majesty's Crown by the Sea Customs Act. In 1901, Karachi was declared the chief port of Sindh. In the following year, a plan was instituted to build permanent offices for the port and Customs officials at Karachi. The task was entrusted to Mr. G. Willet, the consulting architect to the Government, who designed the new building as a semi-circular structure in the Victorian tradition. The construction of the building commenced in 1912 and was completed in 1914. The first meeting of the KPT and Customs was held in that building on 12 January 1916.
After independence in 1947, the Sea Customs Act, 1878 continued to be the legal framework for Customs operations in Pakistan. The Land Customs Estate Mauripur, spreading over 4605.38 acres, was occupied in three pieces of 960, 2720.38 and 925 acres in 1878, 1930 and 1935, from the revenue department, the Karachi Port Trust and the Karachi Municipal Corporation respectively; the land was acquired by the erstwhile Central Board of Revenue, Finance Department, Government of India. However, the need for new Customs legislation was felt all along. The task of developing the new legislation was undertaken in 1966 by the First Secretary of the Central Board of Revenue, and the Customs Act, 1969 was promulgated on 20 June 1968. History of Automation The Federal Board of Revenue (previously the Central Board of Revenue) established a company for automation, namely Systems Limited, in 1988. Processing was done on an IBM System/34 machine. There were standalone systems in every Collectorate. A Goods Declaration (then called a bill of entry, or B/E) was submitted and a machine number was allocated manually; a register was maintained for this purpose. Data regarding value was provided by the computer operator in the form of an assessment sheet (at this stage some customs brokers managed to get values of their choice, as values could be manoeuvred by the computer operator). The printout of the sheet was attached to the bill of entry (B/E) and sent to the group in routine. After completion of the B/E and out-of-charge, the data was fed manually into the computer system by KPOs of the Computer Bureau through batch processing. In December 1992, online computer processing of bills of entry was started in the Appraisement Collectorate. Between 1992 and 1999, hard copies of B/Es were fed into the computer by the KPOs of the import section. A running number was allocated to each B/E by the computer and manually affixed on the B/E.
Assessment was made in the relevant groups, and simultaneously the entry was recorded in the computer. The system automatically calculated the duty. A batch of eight bills of entry was forwarded to the bank with a summary. The bank verified that summary against the pay orders it received. The accounts section was responsible for making consignments out of customs charge after satisfying itself that all taxes and duties had been paid into the national exchequer. Pakistan Revenue Automation (Pvt.) Limited (PRAL) In June 1994 a new company owned by the Federal Board of Revenue, Pakistan Revenue Automation (Pvt.) Limited (PRAL), was established and incorporated. It took over the earlier company, Systems Limited, in December 1996. The head office was located at 14, Hill Road, F-6/3 Islamabad, Pakistan. The Federal Board of Revenue reportedly allocated a budget of Rs. 700 million for PRAL in 2013-14. The purpose of PRAL was to develop in-house software for the customs and income tax departments. It employs 1,250 people, of whom 283 work on the customs side and the rest on the indirect taxation side (income tax and VAT). In 1996, PRAL purchased a new IBM AS/400 machine. To facilitate traders, service centres were opened during 1999 in the Appraisement Collectorate and then at Port Qasim and the airport in Karachi. Subsequently these were also introduced at some dry ports (ICDs). Processing of B/Es started with data entry at the service centre by 11 computer operators, after which a number was allotted by a small printer. A GD processing fee was charged at Rs. 55 per entry between 9 am and 10 am, Rs. 65 between 10 am and 3 pm, and Rs. 300 after 3 pm. In 2000, traders started bringing data on floppy discs, which were uploaded at the service centres; as a result, the long queues were reduced to a manageable size. E-filing PaCCS—Pakistan Customs Computerized System Conceptualization of WeBOC The conceptualization of a new home-grown system had been started a few years earlier by the department.
The new software was named e-Customs. The basic modules of GD filing, examination, clearance and associated jobs were tested. Two customs clearing agents were selected who volunteered to file some Goods Declarations (GDs) through the new system. At that time a machine number was allocated on the filing of a GD, and some processes were still completed manually. Besides testing the system, actual clearance was made on a hard copy of the GD after manual checking. Later, the FBR constituted a team of customs officers to review this software. The team identified some basic flaws in the new software with reference to customs law and business processes on the ground; the officers expressed reservations and recommended improvements. See also Pakistan Customs cricket team References External links Official Pakistan Customs Computerized System Pakistan Customs Federal Board of Revenue Pakistan federal departments and agencies Customs services
3370544
https://en.wikipedia.org/wiki/Kohan%3A%20Immortal%20Sovereigns
Kohan: Immortal Sovereigns
Kohan: Immortal Sovereigns is a real-time strategy video game developed by TimeGate Studios. It was published for Microsoft Windows by Strategy First, and ported to Linux by Loki Software, both in 2001. With a high fantasy setting, the game follows immortal beings named Kohan. It features a lengthy single-player campaign and skirmish maps playable in multiplayer or against the AI. The gameplay focuses on controlling companies instead of individual soldiers, a mechanic praised by critics for eliminating micromanagement. A sequel, Kohan II: Kings of War, was released in 2004. Gameplay The Kohan economy has five resources, of which gold, as the only resource which can be stockpiled, is the most important. The four secondary resources, stone, wood, iron, and mana, are used to support the military; if their production is insufficient, gold income will be decreased to accommodate. Resources are produced in settlements or in mines; mines can only be placed in predetermined locations. Settlements have a number of slots to be occupied by one of eight components; each produces a particular resource, or gives another benefit to the settlement. Settlements also determine the support limit, which represents the number of companies the player can support. The main military unit in Kohan is the company. Each company is led by a Captain, has four front line units, and can have up to two different support units. The units available for company creation depend on the components in the settlement where the company is being recruited. For each company, a recruitment cost must be paid in gold; furthermore, each unit in the company requires a certain amount of secondary resources to support itself. Companies are defined by experience, morale and formation. A company's support units and Kohan can provide additional modifiers, affecting attack strength, move speed, defense and other attributes. Once a company engages in combat, each unit will fight individually.
As long as a single unit survives combat, the company can eventually resupply to full strength. Units in Kohan are divided into six categories: infantry, cavalry, archer, specialty, support, and Hero elements. The first four categories can be both front line and support troops, while the fifth may only occupy support unit slots. The sixth category represents the Kohan, who are the most powerful units, and can only be put in the Captain slot. Each Kohan can provide several modifiers and cast several spells. Kohan have an experience stat separate from the companies' experience, which affects their abilities. If a Kohan dies, he may be resurrected, but will lose all experience. If no Kohan is available, a Captain without any special abilities will lead the company. Kohan can be detached from and attached to companies at any time if the company is in supply (see below). A significant element in Kohan is the three zones: Zone of Control (ZoC), Zone of Supply (ZoS) and Zone of Population (ZoP). Each company has a ZoC, which is based on formation. If a company's ZoC overlaps with an enemy company's ZoC, they will engage in combat. The ZoS is the area in which companies can be healed; it is provided by settlements, unless the settlement is under siege, and is based on a settlement's size and components. If a company's ZoC overlaps with a friendly ZoS, the company is considered "in supply" and will heal when out of combat. Each settlement also has a ZoP, representing the lands already inhabited. New settlements must be built outside the ZoP. Setting Kohan follows the story of a Kohan named Darius Javidan as he fights the rise of the Ceyah, Kohan tainted by evil, to re-establish Kohan society in Khaldun. According to Steve Hemmesch, TimeGate Studio's lead designer at the time, the storyline of Kohan was influenced by Persian mythology and Zoroastrianism. The Kohan are a group of immortals that the Creator tasked with protecting and fostering Khaldun.
Although the Kohan can be killed with violence, they only remain dead until they are "awakened" through the use of individually assigned amulets. When the Creator desired to build a new world, he consulted the two greatest of his Saadya, angel-like beings, named Ahriman and Ormazd. Of the two plans proposed, Ormazd's best fit the Creator's vision, and the remaining eight Saadya were ordered to create the world, which Ormazd had named Khaldun. During its construction, however, Ahriman, whose plan had been rejected, plotted Khaldun's downfall. While Kohan culture bloomed early on in Khaldun's history, it was destroyed in The Great Cataclysm when certain Kohan desired to be free from the will of the Creator. The Kohan defeated the Ceyah and the traitors were sent away from Kohan society. One Ceyah, Vashti, formerly known as Roxanna Javidan, Darius Javidan's wife, was particularly rebellious against the Creator. She murdered her husband and led the Ceyah armies, hoping to become a tyrant over all of Khaldun. Playable races There are seven distinct playable races in the Kohan series, all of which are common within the fantasy genre, though some have game-specific names. The Mareten (humans), Gauri (dwarves), Drauga (orcs), Haroun (elves), Slaan (lizardfolk), Undead, and Shadow have Kohan that resemble them, although supposedly all Kohan originally appeared human. It is explained that Kohan who dwell with a race for a number of years begin to take on their physical attributes. It is also said that enlightened Kohan could take on a War Form (Drauga-like) or a Magic Form (Haroun-like) in addition to their Normal Form (Mareten-like), and that these races were descendants of Kohan while in those forms, with the Gauri being descendants of the Drauga and the Haroun inheriting qualities of both.
In Kohan and its expansion pack Kohan: Ahriman's Gift, the player can gain control of Gauri, Drauga, Haroun and Slaan settlements and control units from these races, but the player's main settlements are always Mareten settlements. Instead of selecting a playable race, the player selects a faction which has units unique to it. Players of the Ceyah faction can produce Undead and Shadow units as well as Mareten settlers and engineers. Reception The game received "generally favorable reviews" according to the review aggregation website Metacritic. It was praised for eliminating much of the micromanagement inherent in real-time strategy games while introducing new concepts to the genre, and for the strong AI opponents and multiplayer support. It was criticized for the somewhat lackluster world, and the "inability to establish a distinctive atmosphere." John Lee of NextGen said of the game, "Innovation and simplicity are the super attributes here, and even if you've pretty much seen all this before, it's still quite a ride." The Academy of Interactive Arts & Sciences nominated the game as the 2001 "Computer Strategy Game of the Year", an award that ultimately went to Civilization III. However, the game won PC Gamer US's "Best Real-Time Strategy Game" and Computer Gaming World's "Best Strategy Game" awards that year, and was likewise named 2001's top real-time strategy game by Computer Games Magazine and GamePen. The staff of PC Gamer, Computer Gaming World and Computer Games Magazine praised the game's increased strategic depth compared to other real-time strategy titles; the last publication noted that Kohan "puts the 'strategy' back (if it ever truly was there in the first place) into real-time strategy".
The game was nominated for the "Best Artificial Intelligence", "Most Innovative Game", "Best Single-Player Strategy Game", and "Best Multiplayer Strategy Game" awards at GameSpot's Best and Worst of 2001 Awards, which went to Black & White, Shattered Galaxy (twice), and Civilization III, respectively. Expansion Kohan: Ahriman's Gift (known as Kohan: Battles of Ahriman in Europe) is a stand-alone expansion pack for Kohan released on November 5, 2001. The game allows play from an evil perspective, with the player leading armies of Undead and Shadowbeasts. It introduces an improved AI, new units and three new campaigns, as well as some new multiplayer maps and modes. However, it was criticized for not bringing enough new features to justify its cost. The main campaign of Ahriman's Gift serves as a prequel to the original game, told from the perspective of the evil Ceyah Kohan led by their champion Mistress Vashti, formerly Roxanna Javidan, the wife of Darius Javidan, the main protagonist of the original game. The Quest for Darius follows the story of Ilyana Aswan and her armies as she races against time and evil to recover the amulet of Darius Javidan, while the Slaanri campaign features the newly reawakened Slaanri champion, Slyy's Stok, as he struggles to remember his past and unite the tribes of his people against an unknown enemy. Reception Ahriman's Gift received "generally favorable reviews", although moderately less favorable than those of the original Kohan, according to Metacritic. Port and sequel The game was ported to Linux by Loki Software, shipping on August 24, 2001. A special edition was published in May 2002, featuring new heroes, maps and AI options, but not the expansion pack. A Kohan mod tool was released on June 17, 2002. A sequel, Kohan II: Kings of War, was released in 2004. A compilation, Kohan Warchest, is a download bundling the three Kohan titles Immortal Sovereigns, Ahriman's Gift, and Kings of War. It was released by Impulse in January 2011 and Steam in August 2011.
References External links Kohan: Immortal Sovereigns at Loki Software The Awakening - a fan site 2001 video games Fantasy video games Linux games Loki Entertainment games Real-time strategy video games Strategy First games Ubisoft games Video games developed in the United States Video games with expansion packs Windows games TimeGate Studios games
1814464
https://en.wikipedia.org/wiki/0%20A.D.%20%28video%20game%29
0 A.D. (video game)
0 A.D. is a free and open-source real-time strategy video game under development by Wildfire Games. It is a historical war and economy game focusing on the years between 500 BC and 1 BC, with the years between 1 AD and 500 AD planned to be developed in the future. The game is cross-platform, playable on Windows, macOS, Linux, FreeBSD, and OpenBSD. It is composed entirely of free software and free media, using the GNU GPLv2 (or later) license for the game engine source code, and the CC BY-SA license for the game art and music. Gameplay 0 A.D. features the traditional real-time strategy gameplay components of building a base, developing an economy, training an army, engaging in combat, and researching new technologies. The game includes multiple units and buildings specific to each civilization as well as both land and naval units. During the game, the player advances from "village phase", to "town phase", to "city phase". The phases represent the sizes of settlements in history, and every phase unlocks new units, buildings, and technologies. Multiplayer functionality is implemented using peer-to-peer networking, without a central server. Development 0 A.D. originally began in 2001 as a comprehensive total conversion mod concept for Age of Empires II: The Age of Kings. The development team later decided that making the project as a mod was too limiting to their creative freedom, and elected to move their art and ideas to an in-house engine, making it a standalone game. The historical accuracy of the game elements has been the highest development priority. Unit and building names are shown in the original language of the civilization they belong to, and they are also translated into the language in which the user is playing the game. There is also a strong focus on attempting to provide a high visual accuracy of unit armor, weapons, and buildings. On 10 July 2009, Wildfire Games released the source code for 0 A.D. 
under the GNU GPLv2 (or later) license, and made the artwork available under the CC BY-SA license. As of 23 March 2010, around ten to fifteen people were working on 0 A.D.; since development started, over 100 people have contributed to the project. On 5 September 2013, an Indiegogo crowdfunding campaign was started with a funding goal; the money raised was used to hire a programmer. The majority of the project's finances are managed by the Software in the Public Interest organisation. There is no official release date set for the finished version of the game. The composers of the music in the game are Omri Lahav, Jeff Willet, Mike Skalandunas, and Shlomi Nogay. A 26-track soundtrack was released on 8 June 2018. Reception In 2012, 0 A.D. received second place in the IndieDB Player's Choice Upcoming Indie Game of the Year competition. 0 A.D. has been generally well received. It was voted LinuxQuestions.org's "Open Source Game of the Year" for 2013. Between 2010 and June 2021, the game was downloaded from SourceForge.net over 1.3 million times. See also Free and open source software Linux gaming List of free and open-source software packages List of open source games Notes References External links 0 A.D. Alpha 25 Trailer Creative Commons-licensed video games Crowdfunded video games Free software that uses SDL Indie video games Indiegogo projects Linux games MacOS games Multiplayer and single-player video games Multiplayer online games Open-source video games Strategy video games Real-time strategy video games Software that uses wxWidgets Upcoming video games Video games set in ancient Rome Video games set in antiquity Video games set in Egypt Video games set in Greece Video games set in India Video games set in Iran Windows games
2348164
https://en.wikipedia.org/wiki/Job%20scheduler
Job scheduler
A job scheduler is a computer application for controlling unattended background program execution of jobs. This is commonly called batch scheduling, as execution of non-interactive jobs is often called batch processing, though the traditional notions of job and batch are sometimes distinguished and contrasted. Other synonyms include batch system, distributed resource management system (DRMS), distributed resource manager (DRM), and, commonly today, workload automation (WLA). The data structure of jobs to run is known as the job queue. Modern job schedulers typically provide a graphical user interface and a single point of control for definition and monitoring of background executions in a distributed network of computers. Increasingly, job schedulers are required to orchestrate the integration of real-time business activities with traditional background IT processing across different operating system platforms and business application environments. Job scheduling should not be confused with process scheduling, which is the assignment of currently running processes to CPUs by the operating system. Overview Basic features expected of job scheduler software include:
interfaces which help to define workflows and/or job dependencies
automatic submission of executions
interfaces to monitor the executions
priorities and/or queues to control the execution order of unrelated jobs
If software from a completely different area includes all or some of those features, it can be considered to have job scheduling capabilities. Most operating systems, such as Unix and Windows, provide basic job scheduling capabilities, notably via at and batch, cron, and the Windows Task Scheduler. Web hosting services provide job scheduling capabilities through a control panel or a webcron solution. Many programs, such as DBMS, backup, ERP, and BPM systems, also include relevant job-scheduling capabilities.
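The basic features above (a job queue, priorities, dependencies, automatic submission) can be illustrated with a toy in-process scheduler; the class and method names below are invented for the sketch and do not correspond to any particular product:

```python
import heapq

class JobScheduler:
    """Toy single-process job scheduler: a priority job queue plus dependencies."""

    def __init__(self):
        self._queue = []   # heap of (priority, submission order, job name)
        self._seq = 0      # tie-breaker so equal priorities run in submission order
        self._jobs = {}    # job name -> (callable, set of dependency names)
        self._done = set()

    def submit(self, name, func, priority=10, depends_on=()):
        """Queue a job; lower priority values run first."""
        self._jobs[name] = (func, set(depends_on))
        heapq.heappush(self._queue, (priority, self._seq, name))
        self._seq += 1

    def run(self):
        """Run every job whose dependencies have completed; return the run order."""
        ran, blocked = [], []
        while self._queue or blocked:
            if not self._queue:  # only blocked jobs remain: deps can never be met
                raise RuntimeError("unsatisfiable dependencies: %r" % [b[2] for b in blocked])
            item = heapq.heappop(self._queue)
            func, deps = self._jobs[item[2]]
            if deps - self._done:      # a dependency has not finished yet
                blocked.append(item)
                continue
            func()
            self._done.add(item[2])
            ran.append(item[2])
            for b in blocked:          # a blocked job may be runnable now
                heapq.heappush(self._queue, b)
            blocked = []
        return ran
```

A production scheduler adds calendars, persistence, restart and recovery, and agents on remote machines; the point of the sketch is only the queue-plus-dependency shape of the problem.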
Operating system ("OS") or point program supplied job-scheduling will not usually provide the ability to schedule beyond a single OS instance or outside the remit of the specific program. Organizations needing to automate unrelated IT workload may also leverage further advanced features from a job scheduler, such as:
real-time scheduling based on external, unpredictable events
automatic restart and recovery in event of failures
alerting and notification to operations personnel
generation of incident reports
audit trails for regulatory compliance purposes
These advanced capabilities can be written by in-house developers but are more often provided by suppliers who specialize in systems-management software. Main concepts There are many concepts that are central to almost every job scheduler implementation and that are widely recognized with minimal variations: Jobs, Dependencies, Job Streams, and Users. Beyond the basic, single OS instance scheduling tools there are two major architectures that exist for Job Scheduling software. Master/Agent architecture — the historic architecture for Job Scheduling software. The Job Scheduling software is installed on a single machine (Master), while on production machines only a very small component (Agent) is installed that awaits commands from the Master, executes them, then returns the exit code back to the Master. Cooperative architecture — a decentralized model where each machine is capable of helping with scheduling and can offload locally scheduled jobs to other cooperating machines. This enables dynamic workload balancing to maximize hardware resource utilization and high availability to ensure service delivery. History Job Scheduling has a long history. Job Schedulers have been one of the major components of IT infrastructure since the early mainframe systems. At first, stacks of punched cards were processed one after the other, hence the term "batch processing".
From a historical point of view, we can distinguish two main eras about Job Schedulers: The mainframe era Job Control Language (JCL) on IBM mainframes. Initially based on JCL functionality to handle dependencies, this era is typified by the development of sophisticated scheduling solutions (such as Job Entry Subsystem 2/3) forming part of the systems management and automation toolset on the mainframe. The open systems era Modern schedulers on a variety of architectures and operating systems. With standard scheduling tools limited to commands such as at and batch, the need for mainframe standard job schedulers has grown with the increased adoption of distributed computing environments. In terms of the type of scheduling there are also distinct eras: Batch processing - the traditional date and time based execution of background tasks based on a defined period during which resources were available for batch processing (the batch window). In effect the original mainframe approach transposed onto the open systems environment. Event-driven process automation - where background processes cannot be simply run at a defined time, either because the nature of the business demands that workload is based on the occurrence of external events (such as the arrival of an order from a customer or a stock update from a store branch), or because there is no / insufficient batch window. Service Oriented job scheduling - recent developments in Service Oriented Architecture (SOA) have seen a move towards deploying job scheduling as a reusable IT infrastructure service that can play a role in the integration of existing business application workload with new Web Services based real-time applications. Scheduling Various schemes are used to decide which particular job to run. 
Parameters that might be considered include:
Job priority
Compute resource availability
License key, if the job is using licensed software
Execution time allocated to the user
Number of simultaneous jobs allowed for a user
Estimated execution time
Elapsed execution time
Availability of peripheral devices
Occurrence of prescribed events
Job dependency
File dependency
Operator prompt dependency
See also References
https://en.wikipedia.org/wiki/Joyent
Joyent
Joyent Inc. was a software and services company based in San Francisco, California. Specializing in cloud computing, it marketed infrastructure-as-a-service. On June 15, 2016, the company was acquired by Samsung Electronics. Services Triton, Joyent's hosting unit, was designed to compete with Amazon's Elastic Compute Cloud (EC2) and offered infrastructure as a service (IaaS) and platform as a service (PaaS) for large enterprises. This hosting business was used for online social network gaming, where it provided services to companies such as THQ, Social Game Universe, and Traffic Marketplace. The company also hosted Twitter in its early days. Other customers included LinkedIn, Gilt Groupe, and Kabam. In June 2013 Joyent introduced an object storage service under the name Manta, and partnered in September 2013 with network appliance vendor Riverbed to offer an inexpensive content-delivery network. In February 2014, Joyent announced a partnership with Canonical to offer virtual Ubuntu machines. Software Joyent used and supported open source projects, including Node.js, Illumos and SmartOS, its own distribution of Illumos, featuring its port of the KVM hypervisor for abstracting the software from the hardware, DTrace for troubleshooting and systems monitoring, and the ZFS file system to connect servers to storage systems. The company open-sourced SmartOS in August 2011. Joyent took software that evolved over time in the running of its hosted business and licensed that software under the name Triton DataCenter (formerly "Triton Enterprise", "SDC" or "SmartDataCenter") to large hardware companies such as Dell. History The name Joyent was coined by David Paul Young in the second half of 2004, and some early funding was obtained from Peter Thiel. More funding was disclosed in July 2005 with Young as executive officer and director.
One of the early products was an online collaboration tool named Joyent Connector, an unusually large Ruby on Rails application, which was demonstrated at the Web 2.0 Conference in October 2005, launched in March 2006, open sourced in 2007, and discontinued in August 2011. In November 2005, Joyent merged with TextDrive. Young became the chief executive of the merged company, while TextDrive CEO Dean Allen, a resident of France, became president and director of Joyent Europe. Jason Hoffman (from TextDrive), serving as the merged company's chief technical officer, spearheaded the move from TextDrive's initial focus on application hosting to massively distributed systems, leading to a focus on cloud computing software and services to service providers. Allen left the company in 2007. Young left the company in May 2012, and Hoffman took over as interim chief executive until the appointment of Henry Wasik in November 2012. Hoffman stepped down from his position as the company's chief technical officer in September 2013 and took a new position at Ericsson the next month. Bryan Cantrill was appointed CTO in his place in April 2014, with Mark Cavage assuming Cantrill's former VP engineering role. The company has a history of acquisitions and divestments. In 2009, Joyent acquired Reasonably Smart, a cloud startup company with products based on JavaScript and Git. In 2009, it sold off both Strongspace and Bingodisk to ExpanDrive. In 2010, Joyent purchased LayerBoom, a Vancouver-based startup that provides solutions for managing virtual machines running on Windows and Linux. On June 16, 2016, Samsung announced that it was acquiring Joyent. On June 6, 2019, Joyent announced that their Triton public cloud would be shut down on November 9, 2019. Financing In 2004, TextDrive bootstrapped itself as a hosting company through crowd funding: customers were invited to invest money in exchange for free hosting for the lifetime of the company. 
TextDrive and, later, Joyent repeated the money-raising procedure a number of times in order to avoid the venture capital market. The company nonetheless began to flounder, suffering from an absence of leadership and plagued by reliability issues, with users leaving for other hosts. Joyent raised venture capital for the first time in November 2009 from Intel and Dell. Joyent's early institutional investors included El Dorado Ventures, Epic Ventures, Intel Capital (Series A, B rounds), Greycroft Partners (Series A, B rounds), and Liberty Global (Series B round). In January 2012, Joyent secured a new round of funding totalling $85 million from Weather Investment II, Accelero Capital, and Telefónica Digital. In October 2014, Joyent raised an additional $15 million in Series D funding from existing investors.
https://en.wikipedia.org/wiki/David%20J.%20Hickton
David J. Hickton
David J. Hickton (born August 14, 1955) is the director and founder of the University of Pittsburgh Institute for Cyber Law, Policy and Security. Prior to that, he was the 57th U.S. Attorney for the Western District of Pennsylvania. He resigned following the election of President Donald Trump and began his position at Pitt in January 2017. While a U.S. Attorney, Hickton brought several indictments for cybertheft and hacking. He also played a key role in combating the opioid abuse epidemic in Western Pennsylvania. Prior to becoming U.S. Attorney, Hickton engaged in the private practice of law, specifically in the areas of transportation, litigation, commercial and white collar crime. Early life and education Hickton was born on August 14, 1955, in Columbus, Ohio. He received his undergraduate degree from Pennsylvania State University and his Juris Doctor degree from the University of Pittsburgh School of Law, where he met his wife Dawne Eileen Sepanski Hickton. Career Hickton began his legal career as a law clerk for U.S. District Judge Gustave Diamond from 1981 to 1983. For eleven years, Hickton was an adjunct professor at the Duquesne University School of Law, where he taught a course on antitrust law. He served on the Board of Trustees at Penn State University from 1977 to 1980. He was nominated as United States Attorney for Western Pennsylvania by President Barack Obama on May 3, 2010, and confirmed by the U.S. Senate on August 5, 2010. In May 2014, Hickton's office brought an indictment against five members of the Chinese People's Liberation Army, alleging economic espionage. The defendants were charged with hacking into American entities to steal trade secrets and other information that would be useful to Chinese competitors. Victims included Westinghouse Electric Company, U.S. Steel, Alcoa, Inc., and Allegheny Technologies. His office also indicted Russian hacker Evgeniy Bogachev, one of the world's leading cyber criminals.
In July 2015, his office, in cooperation with the U.S. Federal Bureau of Investigation (FBI) and legal authorities in 19 other countries, shut down Darkode (also styled Dark0de), a cybercrime forum and black marketplace for security hackers. Darkode offered malware to disrupt operations in computer systems in several countries, and offered stolen data ranging from U.S. Social Security numbers to passwords. In June 2015, Hickton and his office brought a 21-count indictment of conspiracy, money laundering, wire fraud, and identity theft against Cuban national Yoandy Perez Llanes. In 2016, Llanes was extradited from Venezuela to the U.S., and in 2017 Llanes and an accomplice, Soler Nodarse, pleaded guilty for their parts in a $2.2 million scheme in which hackers stole an estimated 62,000 tax forms of UPMC employees and sold them on the dark web. While U.S. Attorney for Western Pennsylvania, Hickton was named to co-chair a national Heroin Task Force. In 2014, he formed the U.S. Attorney's Working Group on Addiction: Prevention, Intervention, Treatment, and Recovery. His office worked with the University of Pittsburgh to post information online about lethal batches of heroin. In 2011, Hickton assembled a Community Police Working Group to help build trust between members of law enforcement and the public. The group held community-based meetings, developed a Crisis Team, and distributed thousands of surveys to elicit community feedback about community-police relations and safety. In 2015, Pittsburgh was selected by Attorney General Loretta Lynch as one of six pilot cities for the National Initiative for Building Community Trust and Justice. Following a three-year investigation, Hickton secured fundamental changes in the Pennsylvania Department of Corrections (PDOC) to humanely address the issues of unconstitutional confinement conditions for those suffering from serious mental illness and for victims of institutional sexual assault.
He also initiated a case study in police-community reconciliation in the neighborhood of Homewood. In 2015, Hickton led an investigation of Education Management Corporation (EDMC), which resulted in the recovery of $95.5 million, the largest False Claims Act (FCA) recovery of Department of Education funds. In 2016, Hickton successfully prosecuted Nicholas Trombetta, former CEO and founder of the Pennsylvania Cyber Charter School, who pleaded guilty to conspiracy after taking approximately $8 million of educational funds for illegal use. In 2016, Hickton also led the prosecution of Michael J. Ruffatto, who transferred to his personal bank account $5.7 million in funds from the U.S. Department of Energy's National Energy Technology Laboratory that had been awarded to the North American Power Group. At the request of Linda Lane, Superintendent of Pittsburgh Public Schools, Hickton responded to an incident of cross burning in front of an elementary school in the city. He said the incident was a "notorious sign of hate" and that the US Civil Rights Section and the FBI were also participating in the investigation. Awards and recognition Hickton and his wife Dawne have been involved in the Loren H. Roth, MD, Summer Research Program in the School of Medicine. Pitt declared Hickton a Legacy Laureate in 2013. He also received a 225th anniversary medallion, an honor bestowed on alumni who have brought particular honor to the University through their work and service. In 2016, Pitt's School of Law named Hickton one of its Distinguished Alumni. That same year, Hickton was named Attorney of the Year by The Legal Intelligencer. On January 27, 2017, at the 19th Annual LEAD (Law Enforcement Agency Directors) Awards, Hickton was presented with a special recognition award for his work in the areas of national security and cyber crime. Hickton is a Fellow in the American College of Trial Lawyers and a Fellow of the Academy of Trial Lawyers of Allegheny County.
He has been admitted before the U.S. Supreme Court, the Pennsylvania Supreme Court, the U.S. District Court for the Western District of Pennsylvania, and several of the U.S. circuit courts. In 2013, Hickton and his wife donated $1 million to establish an endowment for the University of Pittsburgh's Elder Law Clinic, a clinic which is designed to teach law students practical skills while providing free legal services to low income older adults and their family members. Under President Bill Clinton, Hickton served on the President's Advisory Committee on the Arts for the John F. Kennedy Center for the Performing Arts. He is an executive board member of the Pittsburgh Public Theater and also served as its president. He was a longtime member of the Pittsburgh Cultural Trust, a non-profit agency that works to promote arts and culture in Downtown Pittsburgh. He and his wife have six children and two grandchildren.
https://en.wikipedia.org/wiki/Microsoft%20Office%202019
Microsoft Office 2019
Microsoft Office 2019 is a version of Microsoft Office for both Windows and Mac. It is the successor to Office 2016 and was succeeded by Office 2021 on October 5, 2021. It was released to general availability for Windows 10 and for macOS on September 24, 2018. Some features that had previously been restricted to Office 365 subscribers are available in this release. Office 2019 retains the same major version number, 16, as its predecessor Office 2016, making it the second perpetual release of Office 16. Mainstream support for Office 2019 will end on October 10, 2023. Unlike other versions of Microsoft Office, Office 2019 will only get two years of extended support, which means that support for Office 2019 will end on the same day as support for Office 2016, on October 14, 2025. History On April 27, 2018, Microsoft released Office 2019 Commercial Preview for Windows 10. On June 12, 2018, Microsoft released a preview for macOS. New features Office 2019 includes many of the features previously published via Office 365, along with improved inking features, LaTeX support in Word, new animation features in PowerPoint including the morph and zoom features, and new formulas and charts in Excel for data analysis. OneNote is absent from the suite, as the Universal Windows Platform (UWP) version of OneNote bundled with Windows 10 replaces it. OneNote 2016 can be installed as an optional feature through the Office installer. For Mac users, Focus Mode will be brought to Word, 2D maps will be brought to Excel, and new Morph transitions, SVG support and 4K video exports will be coming to PowerPoint, among other features. Despite being released in the same month, the new Office user interface in Word, Excel, PowerPoint, and Outlook is only available to Office 365 subscribers, not perpetual Office 2019 licenses.
Editions Traditional editions As with its predecessor Microsoft Office 2016, Microsoft Office 2019 has perpetual SKU editions aimed towards different markets. Like its predecessor, Microsoft Office 2019 contains Word, Excel, PowerPoint and OneNote and is licensed for use on one computer. Five perpetual SKU editions of Office 2019 were released for Windows: Home & Student: This retail suite includes the core applications only – Word, Excel, PowerPoint, OneNote. Home & Business: This retail suite includes the core applications and Outlook. Standard: This suite, only available through volume licensing channels, includes the core applications, as well as Outlook and Publisher. Professional: This retail suite includes the core applications, as well as Outlook, Publisher, and Access. Professional Plus: This suite includes the core applications, as well as Outlook, Publisher, Access, and Skype for Business. This edition is available through retail channels (developer-tools subscriptions such as MSDN and Visual Studio subscriptions) and volume licensing channels. Unlike its predecessor, both the retail and volume-licensed Windows versions use Click-to-Run (C2R) for installation. Also unlike its predecessor, the Windows versions of Office 2019 run only on Windows 10 and Windows Server 2019. Like its predecessor, three traditional editions of Office 2019 were released for Mac (macOS Sierra or later): Home & Student: This retail suite includes the core applications only. Home & Business: This retail suite includes the core applications and Outlook. Standard: This suite, only available through volume licensing channels, includes the core applications and Outlook. Deployment Office 2019 requires Windows 10, Windows Server 2019, or macOS Mojave and later. macOS installations can be acquired from the Microsoft website or the Mac App Store.
For Office 2013 and 2016, various editions containing the client apps were available in both Click-to-Run (inspired by Microsoft App-V) and traditional Windows Installer setup formats. However, the Office 2019 client apps only have a Click-to-Run installer, and only the server apps have the traditional MSI installer. The Click-to-Run version has a smaller footprint; in the case of Microsoft Office 2019 Pro Plus, the product requires 10 GB less than the MSI version of Office 2016 Pro Plus. Volume licensing versions of Office 2019 cannot be downloaded from the Microsoft Volume Licensing Service Center and must be deployed by supplying a configuration.xml file to the Office Deployment Tool (ODT) from the command line. OS Support All releases are available for download in the Update history for Office for Mac (and Update history for Office 2016 for Mac). See also List of office suites; List of typefaces included with Microsoft Windows (list of Office Cloud fonts continues in footnote)
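A volume-licensing deployment with the Office Deployment Tool is driven by such a configuration.xml file. The sketch below is illustrative only: the product and channel identifiers shown (ProPlus2019Volume, PerpetualVL2019) are the commonly documented values for volume-licensed Office 2019 Professional Plus, and should be verified against Microsoft's current ODT documentation for other editions or languages.

```xml
<Configuration>
  <!-- 64-bit volume-licensed Office 2019 ProPlus from the perpetual channel -->
  <Add OfficeClientEdition="64" Channel="PerpetualVL2019">
    <Product ID="ProPlus2019Volume">
      <Language ID="en-us" />
    </Product>
  </Add>
  <!-- Silent install: no UI, EULA accepted automatically -->
  <Display Level="None" AcceptEULA="TRUE" />
</Configuration>
```

The ODT is then run twice from the command line: once with /download to fetch the installation files described by the XML, and once with /configure to perform the installation (e.g. setup.exe /download configuration.xml followed by setup.exe /configure configuration.xml).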
https://en.wikipedia.org/wiki/Challenger%202
Challenger 2
The FV4034 Challenger 2 (MOD designation "CR2") is a third generation British main battle tank (MBT) in service with the armies of the United Kingdom and Oman. It was designed and built by the British company Vickers Defence Systems (now known as BAE Systems Land & Armaments). Vickers Defence Systems began to develop a successor to the Challenger 1 as a private venture in 1986. The Ministry of Defence ordered a prototype in December 1988. In June 1991, the MoD placed an order for 140 vehicles, with a further 268 ordered in 1994. Production began in 1993 and the first tanks were delivered in July 1994, replacing the Challenger 1. After a production delay, the tank entered service with the British Army in 1998, with the last delivered in 2002. The Challenger 2 was also exported to Oman. The Challenger 2 is an extensive redesign of the Challenger 1. Although the hull and automotive components seem similar, they are of a newer design than those of the Challenger 1, and only around 3% of components are interchangeable. A visual recognition feature is the armoured housing for the TOGS thermal gunsight: the Challenger 2 has this above the gun barrel, while the Challenger 1 has it on the right-hand side of the turret. The tank has a 550 km range and a maximum road speed of 59 km/h. The Challenger 2 is equipped with a 55-calibre long L30A1 tank gun, the successor to the L11 gun used on the Chieftain and Challenger 1. Unique among NATO main battle tank armament, the L30A1 is rifled, because the British Army continues to place a premium on the use of high-explosive squash head (HESH) rounds in addition to armour-piercing fin-stabilised discarding-sabot rounds. The Challenger 2 is also armed with a L94A1 EX-34 7.62 mm chain gun and a 7.62 mm L37A2 (GPMG) machine gun. Fifty main armament rounds and 4,200 rounds of 7.62 mm ammunition are carried. The Challenger 2 has a four-man crew. The turret and hull are protected with second generation Chobham armour (also known as Dorchester).
To date, the only time the tank has been destroyed during operations was by another Challenger 2 in a "blue on blue" (friendly fire) incident at Basra in 2003, when the destroyed tank had its hatch open. It has seen operational service in Bosnia and Herzegovina, Kosovo and Iraq. Since entering service, various upgrades have sought to improve the Challenger 2's protection, mobility and lethality, the most recent of which was the Life Extension Programme (LEP). In March 2021, the British Army announced plans to upgrade 148 Challenger 2s under LEP with the aim of extending its service life to at least 2035; these upgraded models will be known as Challenger 3. It is not planned to upgrade all Challenger 2s; those not upgraded will be retired. History The Challenger 2 is the third vehicle of this name, the first being the A30 Challenger, a World War II design using the Cromwell tank chassis with a 17-pounder gun. The second was the Persian Gulf War era Challenger 1, which was the British army's main battle tank (MBT) from the early 1980s to the mid-1990s. Vickers Defence Systems began to develop a successor to Challenger 1 as a private venture in 1986. Following the issue of a Staff Requirement for a next-generation tank, Vickers submitted its plans for the Challenger 2 to the Ministry of Defence (MoD). Vickers' indigenous design was received skeptically by some senior MoD officials, and was evaluated against the American M1 Abrams offered by General Dynamics. After some supportive lobbying by Baron Young, the Thatcher government chose to proceed with the Challenger 2 in December 1988. Vickers received a £90 million contract for a demonstrator vehicle to be delivered by September 1990. The demonstration phase had three milestones for progress, with dates of September 1989, March 1990, and September 1990. At the last of these milestones, Vickers was to have met 11 key criteria for the tank's design.
The Challenger 1's performance in the Gulf War bolstered the MoD's confidence in British armour. The MoD evaluated the American M1A2 Abrams, the French Leclerc and the German Leopard 2 against the Challenger 2. The MoD rejected these alternatives, and in June 1991 placed a £520 million order for 127 MBTs and 13 driver training vehicles. An order for a further 259 tanks and 9 driver trainers (worth £800 million) was placed in 1994. Vickers struggled to market the tank for export. Its one export success was Oman, which ordered 38 Challenger 2s: 18 in June 1993 and a further 20 in November 1997. Both batches ordered by Oman contain notable differences from the UK version: a larger cooling group and rear towing eyes, running gear and bazooka plates similar to the Challenger 1, and a loader's Browning 0.5-calibre M2 heavy machine gun. Deliveries of the Challenger 2 to Oman were completed in 2001. Production began in 1993 at two primary sites: Elswick, Tyne and Wear, and Barnbow, Leeds, although over 250 subcontractors were involved. The first tanks were delivered in July 1994. The Challenger 2 failed its acceptance trials in 1994, and it was forced into the Progressive Reliability Growth Trial in 1995. Three vehicles were tested for 285 simulated battlefield days. The tank was then accepted into service in 1998. An equally important milestone was the In-Service Reliability Demonstration (ISRD), which took place from September to December 1998: 12 fully crewed tanks were tested at the Bovington test tracks and at the Lulworth Bindon ranges, and the tank exceeded all staff requirements. The ISRD milestone was successfully achieved in January 1999. The Challenger 2 entered service with the British Army in June 1998 (with the Royal Scots Dragoon Guards), with the last delivered in 2002.
After the Army 2020 restructuring, only three Challenger 2 tank regiments will remain: the Queen's Royal Hussars, the King's Royal Hussars and the Royal Tank Regiment, each of which is the tank regiment of an Armoured Infantry Brigade. A single Army Reserve regiment, the Royal Wessex Yeomanry, will provide reservist Challenger crews to the regular regiments. The Trojan minefield breaching vehicle and the Titan bridge-laying vehicle, based on the chassis of the Challenger 2, were shown in November 2006; 66 are to be supplied by BAE Systems to the Royal Engineers, at a cost of £250 million. A British military document from 2001 indicated that the British Army would not procure a replacement for the Challenger 2 because of a lack of foreseeable conventional threats in the future. However, IHS Jane's 360 reported on 20 September 2015, following discussions with senior Army officers and procurement officials at DSEI 2015 and with the head of the British Army, General Sir Nick Carter, that the British Army was looking at either upgrading the Challenger 2 or replacing it outright. Sources confirmed that the future of the MBT was being considered at the highest levels of the Army. This stemmed from the British Army's concern with the new Russian T-14 Armata main battle tank, the growing ineffectiveness of the ageing L30 rifled gun, and the limited types of ammunition it supports. Further, it was confirmed that numerous armoured vehicle manufacturers had held discussions with the MoD about a potential replacement for the Challenger 2. Shortly after, the British Army decided that purchasing a new tank would be too expensive and chose to proceed with a Challenger 2 life extension project (LEP). The Challenger 2 is expected to remain in service until 2025. Maintenance and overhaul of the Challenger 2 is undertaken by Babcock Defence Support Group, and design authority for the tank is held by BAE Systems.
Design Armament The Challenger 2 is equipped with a 55-calibre long L30A1 tank gun, the successor to the L11 gun used on the Chieftain and Challenger 1. The gun is made from high-strength electro-slag remelting (ESR) steel with a chromium alloy lining and, like earlier British 120 mm guns, it is insulated by a thermal sleeve. It is fitted with a muzzle reference system and fume extractor, and is controlled by an all-electric control and stabilisation system. The turret has a rotation time of 9 seconds through 360 degrees. Uniquely among NATO main battle tank armament, the L30A1 is rifled; it and its predecessor, the Royal Ordnance L11A5, are the only third-generation main battle tank guns to use a rifled barrel. This is because the British Army continues to place a premium on the use of high-explosive squash head (HESH) rounds in addition to armour-piercing fin-stabilised discarding-sabot (APFSDS) rounds. HESH rounds have a longer range than APFSDS, and are more effective against buildings and thin-skinned vehicles. Forty-nine main armament rounds are carried in the turret and hull; these are a mix of L27A1 APFSDS (also referred to as CHARM 3), L31 HESH and L34 white phosphorus smoke rounds, depending on the situation. As with earlier versions of the 120 mm gun, the propellant charges are loaded separately from the shell or KE projectile. A combustible rigid charge is used for the APFSDS rounds and a combustible hemicylindrical bag charge for the HESH and smoke rounds. An electrically fired vent tube is used to initiate firing of the main armament rounds. (The main armament ammunition is thus described as "three-part ammunition", consisting of the projectile, charge and vent tube.) The separation of ammunition pieces also lowers the chances of unfired ammunition detonating prematurely.
The Challenger 2 is also armed with a L94A1 EX-34 7.62 mm chain gun mounted coaxially to the left of the main gun, and a 7.62 mm L37A2 (GPMG) machine gun mounted on a pintle on the loader's hatch ring. 4,200 rounds of 7.62 mm ammunition are carried. The Challenger 2 can also mount a Leonardo "Enforcer" remote control weapon system bearing a 7.62 mm L37A2 (GPMG) machine gun, a 12.7 mm heavy machine gun or a 40 mm automatic grenade launcher. Fire control and sights The digital fire control computer from Computing Devices Co of Canada contains two 32-bit processors with a MIL-STD-1553B databus, and has capacity for additional systems, such as a Battlefield Information Control System. The commander has a panoramic SAGEM VS 580-10 gyrostabilised sight with laser rangefinder. Elevation range is +35° to −35°. The commander's station is equipped with eight periscopes for 360° vision. The Thermal Observation and Gunnery Sight II (TOGS II), from Thales, provides night vision. The thermal image is displayed on both the gunner's and commander's sights and monitors. The gunner has a stabilised primary sight with a laser rangefinder. The driver's position is equipped with a Thales Optronics image-intensifying Passive Driving Periscope (PDP) for night driving and a rear-view thermal camera. Protection The Challenger 2 is a heavily armoured and well-protected tank. The turret and hull are protected by second-generation Chobham armour (also known as Dorchester), the details of which are classified but which is said to have a mass efficiency more than twice that of rolled homogeneous armour against high-explosive anti-tank projectiles. Crew safety was paramount in the design, which uses a solid-state electric drive for its turret and gun movement instead of hydraulic systems that may leak fluid into the crew compartment. Explosive reactive armour kits and additional bar armour may be fitted as necessary.
The nuclear, biological and chemical (NBC) protection system is located in the turret bustle. The tank's shape is also designed to minimise its radar signature. On each side of the turret are five L8 smoke grenade dischargers. The Challenger 2 can also create smoke by injecting diesel fuel into the exhaust manifolds. Drive system The tank's drive system comprises:

Engine: Perkins 26.1 litre, 60° Vee, twin-turbocharged CV12-6A four-stroke, four-valve-per-cylinder (pushrod), direct-injection diesel engine, producing peak power at 2,300 rpm and 4,126 Nm of torque at 1,700 rpm. The engine and gearbox are controlled by a Petards Vehicle Integrated Control System (VICS).
Gearbox: David Brown Santasalo TN54E epicyclic transmission (6 forward, 2 reverse) rated at 1,200 bhp and upgradable to 1,500 bhp.
Suspension: Horstman Defence Systems second-generation (current) or third-generation (future) hydrogas suspension units (HSU).
Track: William Cook Defence hydraulically adjustable TR60 414FS double-pin.
Maximum road speed: 59 km/h.
Range: 550 km on road with external fuel.

The tank is fitted with an Extel Systems Wedel APU (Auxiliary Power Unit, also referred to as a GUE [Generating Unit Engine]) based around a 38 kW Perkins P404C-22 diesel engine, with a 600 A electrical output, which can be used to power the vehicle's electrical systems when it is stationary and the main engine is switched off. This replaces the Perkins P4.108 engine fitted when the tank was first introduced. The use of an APU allows fuel consumption to be reduced, and lowers the audio and thermal signature of the vehicle. By 2013 the British Army had, at various events featuring the Challenger 2, begun to state the on-road range as 550 km as opposed to an earlier stated value of 450 km. They also publicly stated a maximum road speed of 59 km/h while equipped with 15 tons of additional modules.
Crew and accommodation The British Army maintained its requirement for a four-man crew (including a loader) after risk analysis of the incorporation of an automatic loader suggested that auto-loaders reduced battlefield survivability. Mechanical failure and the time required for repair were prime concerns. Similar to every British tank since the Centurion, and most other British AFVs, Challenger 2 contains a boiling vessel (BV) for water for use preparing and heating food and drink. The only armed forces with BVs in armoured vehicles are those of Britain and India. Operational history The Challenger 2 had been used in peacekeeping missions and exercises before, but its first combat use came in March 2003 during the invasion of Iraq. 7th Armoured Brigade, part of 1st Armoured Division, was in action with 120 Challenger 2s around Basra. The type saw extensive use during the siege of Basra, providing fire support to the British forces and knocking out Iraqi tanks, mainly T-54/55s. The problems that had been identified during the large Saif Sareea II exercise, held 18 months earlier, had been solved by the issuing of Urgent Operational Requirements for equipment such as sand filters and so during the invasion of Iraq the tank's operational availability was improved. During the 2003 invasion of Iraq, the Challenger 2 tanks suffered no tank losses to Iraqi fire. In one encounter within an urban area, a Challenger 2 came under attack from irregular forces with machine guns and rocket propelled grenades. The driver's sight was damaged and while attempting to back away under the commander's directions, the other sights were damaged and the tank threw its tracks entering a ditch. It was hit by 14 rocket propelled grenades from close range and a MILAN anti-tank missile. The crew survived, safe within the tank until it was recovered for repairs, the worst damage being to the sighting system. It was back in operation six hours later. 
One Challenger 2 operating near Basra survived being hit by 70 RPGs in another incident.

25 March 2003: A friendly fire ("blue-on-blue") incident in Basra in which one Challenger 2 of the Black Watch Battlegroup (2nd Royal Tank Regiment) mistakenly engaged another Challenger 2 of the Queen's Royal Lancers after detecting what was believed to be an enemy flanking manoeuvre on thermal equipment. The attacking tank's second HESH round hit the open commander's hatch lid of the QRL tank, sending hot fragments into the turret and killing two crew members. The hit caused a fire that eventually ignited the stowed ammunition, destroying the tank. This is the only Challenger 2 to have been destroyed on operations.

August 2006: An RPG-29, which fires a tandem-charge round, penetrated the frontal lower underbelly armour of a Challenger 2 commanded by Captain Thomas Williams of The Queen's Royal Hussars south-east of al-Amarah, southern Iraq. Its driver, Trooper Sean Chance, lost part of his foot in the blast; two more of the crew were slightly injured. Chance was able to reverse the vehicle to the regimental aid post despite his injuries. The incident was not made public until May 2007; in response to accusations that crews had been told the tank was impervious to the insurgents' weapons, the MoD said "We have never claimed that the Challenger 2 is impenetrable." Since then, the explosive reactive armour has been replaced with Chobham armour and the steel underbelly lined with armour as part of the 'Streetfighter' upgrade, a direct response to this incident.

6 April 2007: In Basra, Iraq, a shaped charge from an IED penetrated the underside of a tank, resulting in the driver losing a leg and causing minor injuries to another soldier. To help prevent incidents of this nature, Challenger 2s have been upgraded with a new passive armour package, including the use of add-on armour manufactured by Rafael Advanced Defense Systems of Israel.
When deployed on operations the Challenger 2 is now normally upgraded to TES (Theatre Entry Standard), which includes a number of modifications including armour and weapon system upgrades.

Upgrades

CLIP
The Challenger Lethality Improvement Programme (CLIP) was a programme to replace the existing L30A1 rifled gun with the smoothbore Rheinmetall 120 mm gun used in the Leopard 2, M1 Abrams and K2 Black Panther. The use of a smoothbore weapon would have allowed the Challenger 2 to use NATO-standard ammunition, including tungsten-based kinetic energy penetrators which do not attract the same political and environmental objections as depleted uranium rounds. The production lines for rifled 120 mm ammunition in the UK have been closed for some years, so existing stocks of ammunition for the L30A1 are finite. A single Challenger 2 was fitted with the L55 and underwent trials in January 2006. The smoothbore gun was the same length as the L30A1 and was fitted with the rifled gun's cradle, thermal sleeve, bore evacuator and muzzle reference system. Early trials apparently revealed that the German tungsten DM53 round was more effective than the depleted-uranium CHARM 3. The ammunition storage and handling arrangements had to be changed to cater for the single-piece smoothbore rounds instead of the separate-loading rifled rounds. Other improvements were also considered, including a regenerative NBC protection system.

CSP / LEP / Challenger 3
In 2005, the MOD recognised a need for a Capability Sustainment Programme (CSP) to extend the service life of the Challenger 2 into the mid-2030s and upgrade its mobility, lethality and survivability. The CSP was planned to be complete by 2020 and was to combine all the upgrades from CLIP, including the fitting of a 120 mm smoothbore gun.
By 2014, the CSP had been replaced by the Life Extension Programme (LEP), which shared a similar scope of replacing obsolete components and extending the tank's service life from 2025 to 2035; however, the 120 mm smoothbore gun had seemingly been abandoned. In 2015, the British Army provided an insight into the scope of the LEP, dividing it into four key areas:
Surveillance and Target Acquisition: upgrades to the commander's primary sight and gunner's primary sight, as well as the replacement of the thermal observation and gunnery sights (TOGS) with third-generation thermal imaging.
Weapon Control System: upgrades to the fire control computer, fire control panel and gun processing unit.
Mobility: upgrades including third-generation hydrogas suspension, improved air filtration, CV-12 common rail fuel injection, transmission and cooling.
Electronic Architecture: upgrades to the gunner's control handles, video distribution architecture, generic vehicle architecture compliant interfaces, increased on-board processing and an improved human–machine interface.
The MOD also began assessing active protection systems (APS) on the Challenger 2, including MUSS and Rheinmetall's ROSY rapid obscurant system. In August 2016, the MOD awarded assessment-phase contracts for the Life Extension Programme to several companies: Team Challenger 2 (a consortium led by BAE Systems and including General Dynamics UK), CMI Defence and Ricardo plc, Rheinmetall, and Lockheed Martin UK. In November, the MOD shortlisted two teams, led by BAE Systems and Rheinmetall, to compete for the LEP, which was then estimated to be worth £650 million ($802 million). In October 2018, BAE Systems unveiled its proposed Challenger 2 LEP technology demonstrator, the "Black Night". The new improvements included a Safran PASEO commander's sight, a Leonardo thermal imager for the gunner and a Leonardo DNVS 4 night sight.
The turret also received modifications to improve the speed of traverse and provide greater space, as well as regenerative braking to generate and store power. Other enhancements included a laser warning system and an active protection system. Months later, in January 2019, Rheinmetall unveiled its proposal, which included the development of a completely new turret with fully digital electronic architecture, day and night sights for the commander and gunner, and a Rheinmetall L55 120 mm smoothbore gun. Whilst a more substantial upgrade than Black Night, the turret was developed on Rheinmetall's initiative and was not funded by the UK MOD, nor was it part of the MOD's LEP requirements. In June 2019, BAE Systems and Rheinmetall formed a joint venture company, based in the UK, named Rheinmetall BAE Systems Land (RBSL). Despite the merger, the company was still expected to present two separate proposals for the LEP contract; however, at DSEI 2019, RBSL instead opted to showcase only the Rheinmetall proposal. In October 2020, the MOD argued against buying a new main battle tank from overseas instead of pursuing the Challenger 2 LEP, stating that an upgraded Challenger 2 would be "comparable – and in certain areas superior" to a Leopard 2 or Abrams. On 22 March 2021, the MoD published its long-awaited command paper, Defence in a Competitive Age, which confirmed the British Army's plans to upgrade 148 Challenger 2 tanks and designate them Challenger 3. The MoD confirmed the contract with RBSL had been signed, valued at £800 million (US$1 billion), on 7 May 2021. Rheinmetall's more extensive upgrade proposal, including the new 120 mm smoothbore gun, had been accepted. The initial operating capability for the upgraded tanks is expected by 2027, with full operating capability expected to be declared by 2030.
HAAIP
Updates to the automotive components of the Challenger 2 and its associated variants are being undertaken separately from CR2 LEP+ as part of the ongoing Heavy Armour Automotive Improvement Programme (HAAIP), which is expected to continue until 2031 to align with the overall Challenger 3 programme. HAAIP has already led to upgrades to the air filtration system, through the use of cleanable air filters with increased operating life, which were tested in Exercise Saif Sareea 3 in October 2018. The HAAIP programme, awarded to BAE Systems, will also apply a common engine (CV12-8A) and suspension standard (third-generation hydrogas) to the Challenger 2, the DTT, CRARRV, Titan and Trojan, improving reliability. With regard to the powertrain, BAE Systems was evaluating whether to uprate the existing CV12 engine or swap it for alternative designs. The proposed CV12 upgrade by Caterpillar Defense would fit electronically controlled common rail fuel injection and introduce engine health monitoring (HUMS). This would increase the maximum power output from 1,200 bhp to 1,500 bhp, reduce battlefield smoke emissions, and improve fleet reliability and availability. Since this information was released in 2019, it has been announced that the engines will be updated to the CV12-8A specification; no further information has been released in the public domain regarding the introduction of common rail fuel injection and HUMS. The engines and transmission units have themselves also been remanufactured in recent years. Work to update the base Challenger 2 hull and automotive components, undertaken by DE&S, RBSL and Babcock, commenced in July 2021 in advance of these being converted to Challenger 3s. Equipment replaced during HAAIP will be checked for serviceability, repaired if required, and returned for re-use in the existing Challenger 2 fleet. The hulls will also undergo ultrasonic testing, weld repairs and repainting.
The overall scope of HAAIP as of 2021 includes:
CV12-6A engines converted to CV12-8A specification
Third-generation hydrogas suspension
New hydraulic track tensioners (HTT) with in-line accumulators
Improved electric cold start system (intake manifold heater)
New main engine air intake filters
Improved main engine/transmission cooling, fitting new high-efficiency radiators (596 sets) and fans (294 triple fan sets with mountings and drive systems). These more modern assemblies will increase cooling capacity and reduce engine fuel cut-back mode through improved air flow efficiency. The contract for the new cooling fans has been awarded to AMETEK Airtechnology Group, the supplier of the current design.

Other in-service upgrades
On 15 December 2017, BAE Systems was awarded a contract to maintain the Challenger 2's thermal imaging system as part of a £15.4 million interim solution separate from the LEP. In October 2019, it was announced that Thales would be supplying its Catherine Megapixel (MP) thermal imaging camera.

Variants

Titan
The Titan armoured bridge layer is based on aspects of the Challenger 2 running gear and replaced the Chieftain Armoured Vehicle Launched Bridge (ChAVLB). The Titan came into service in 2006 with the Royal Engineers, with 33 in service. Titan can carry a single 26-metre-long bridge or two 12-metre-long bridges. It can also be fitted with a bulldozer blade.

Trojan
The Trojan Armoured Vehicle Royal Engineers is a combat engineering vehicle designed as a replacement for the Chieftain AVRE (ChAVRE). It uses the Challenger 2 chassis, and carries an articulated excavator arm, a dozer blade, and attachment rails for fascines. Entering service in 2007, 33 were produced.

Challenger 2E
The Challenger 2E is an export version of the tank.
It has a new integrated weapon control and battlefield management system, which includes a gyrostabilised panoramic SAGEM MVS 580 day/thermal sight for the commander and a SAGEM SAVAN 15 gyrostabilised day/thermal sight for the gunner, both with eye-safe laser rangefinders. This allows hunter/killer operations with a common engagement sequence. An optional servo-controlled overhead weapons platform can be slaved to the commander's sight to allow operation independent from the turret. The power pack has been replaced by a new EuroPowerPack with a transversely mounted MTU MT883 diesel engine coupled to a Renk HSWL 295TM automatic transmission. The increase in both vehicle performance and durability is significant. The smaller but more powerful EuroPowerPack additionally incorporates as standard a cooling system and an air-intake filtration system proven in desert use. The free space in the hull is available for ammunition stowage or for fuel, increasing the vehicle's range to . This power pack was previously installed on the French Leclerc tanks delivered to the UAE as well as the recovery tank version of the Leclerc in service with the French Army. Further developed versions of the EuroPowerPack have more recently been installed in the latest serially produced Korean K2 Black Panther tank as well as the new Turkish Altay tank. BAE Systems announced in 2005 that development and export marketing of the 2E would stop. This has been linked by the media to the failure of the 2E to be selected for the Hellenic Army in 2002, a competition won by the Leopard 2.

CRARRV
The Challenger Armoured Repair and Recovery Vehicle (CRARRV) is an armoured recovery vehicle based on the Challenger 1 hull (with the updated Challenger 2 powertrain) and designed to repair and recover damaged tanks on the battlefield.
It has five seats but usually carries a crew of three soldiers from the Royal Electrical and Mechanical Engineers (REME), of the recovery mechanic and vehicle mechanic/technician trades. There is room in the cabin for two further passengers (e.g. crew members of the casualty vehicle) on a temporary basis. The size and performance are similar to the MBT, but instead of armament it is fitted with:
A main winch with 50 tonnes-force pull in a 1:1 configuration or 98 tonnes-force pull using an included pulley in a 2:1 configuration and an anchor point on the vehicle, plus a small auxiliary winch to aid in deploying the main winch rope.
An Atlas crane capable of lifting at a distance of (this is sufficient to lift a Challenger 2 power pack). In order to improve flexibility and supplement the transportation of power packs around the battlefield, the British Army procured a quantity of dedicated CRARRV High Mobility Trailers (CRARRV HMT). Each CRARRV HMT enables a CRARRV to transport a single (Challenger, Titan or Trojan) power pack or two Warrior power packs, by altering the configuration of dedicated fixtures and attachment of fittings.
A dozer blade for use as an earth anchor/stabiliser, or in obstacle clearance and fire position preparation.
A large set of recovery and heavy repair tools, including a man-portable ultrathermic cutting system with an underwater cutting capability and a man-portable welder.
The design prototype is on display at The REME Museum at MoD Lyneham, Wiltshire.

Operators
United Kingdom: British Army – 408 delivered (227 in service as of 2016, 59 used for training and the remainder held in storage)
Oman: Royal Army of Oman – 38 delivered

Accidents
On 14 June 2017, a Challenger 2 from The Royal Tank Regiment suffered an ammunition explosion during live firing exercises at the Castlemartin Range in Pembrokeshire. The tank was firing 120 mm practice shells with a standard propellant charge.
The explosion critically injured the four-man crew, two of whom later died of their wounds in hospital. The incident resulted in all British Army tank firing exercises being suspended for 48 hours while the cause of the explosion was investigated. It was later determined that a bolt vent axial (BVA) seal assembly had been removed during an earlier exercise and had not been replaced at the time of the incident, allowing explosive gases to enter the turret space. The lack of a written process for removal and replacement of the seal assembly meant that the crew at the time of the incident were unaware of its absence, and it was also noted that inadequate consideration had been given during the production of the L30 gun as to whether it could be fired without the seal assembly. A second explosion that occurred during the incident was attributed to the detonation of bag charges that had not been stowed in the internal ammunition bins, as required by correct procedure.

Replacement
Following Britain's exit from the European Union, early in 2021 the UK entered talks to be allowed into the European Main Battle Tank project as an observer. This may have a bearing on a future replacement of the Challenger 2.

Classified specifications leak
In July 2021, excerpts of the tank's Army Equipment Support Publication (i.e. user manual), containing technical specifications of the vehicle, were posted on the official forums of the war simulation game War Thunder; the poster, allegedly a Challenger 2 tank commander, claimed to have done so in the hope that developer Gaijin Entertainment would modify the performance of the in-game tank to match the specifications detailed in the document.
While the uploaded version of the AESP document was edited to appear as though it had been declassified under the UK's Freedom of Information Act 2000, Gaijin Entertainment stated that the MoD provided confirmation that the information was in fact still classified, and that disseminating the tank's specifications would be a violation of the Official Secrets Act. Due to these possible legal penalties, Gaijin will not handle the information or incorporate it into their game.

See also
List of main battle tanks by generation

External links
British Army Challenger 2 webpage
https://en.wikipedia.org/wiki/Starmad
Starmad
STARMAD (Space Tool for Advanced and Rapid Mission Analysis and Design) addresses the space industry's trend towards space missions, spacecraft, systems and products that require quick solutions for system design and software development. Fundamental aspects are: the capability of minimising the number of steps needed to perform a complete space mission analysis and design; the ability to evaluate and display results instantaneously; and the possibility of controlling all complex space mission subjects in a concurrent manner. STARMAD aims to achieve cost reduction and quality improvements by streamlining the design process through improved engineer involvement, and hence better understanding and efficiency in designing a space mission.

Definition
STARMAD is a space mission analysis and design tool, intended to enable users to quickly and easily perform the following tasks:
1. Preliminary orbit analysis, in terms of dynamics, geometry, manoeuvre and maintenance, interplanetary transfer, and delta-V budget.
2. Observation payload analysis, in terms of electromagnetic spectrum, optics and sizing.
3. Spacecraft subsystems design, considering attitude control, communications, power system, propulsion system, structural analysis and thermal control.
4. Launch and transfer vehicle information.
5. Mission operation complexity, from the point of view of mission design and planning, flight system design, operational risk avoidance, and ground systems.
Its main features are:
Possibility to perform all, or only a subset of, the tasks listed above.
Ease of use through the graphical user interface.
Capability to concurrently analyse space mission aspects.
By configuring STARMAD with an existing space mission and satellite, it is possible to check the effects that any modifications have on a mission.
Export of data and the possibility to generate a full space mission report.
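The preliminary orbit analysis task listed above includes Hohmann transfer and delta-V budget calculations. A minimal sketch of the underlying vis-viva formulas, with assumed example orbits (this illustrates the standard textbook method, not STARMAD's actual implementation):

```python
import math

MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def hohmann_dv(r1_km: float, r2_km: float) -> float:
    """Total delta-V (km/s) for a Hohmann transfer between coplanar circular orbits."""
    a_t = (r1_km + r2_km) / 2                           # transfer ellipse semi-major axis
    v1 = math.sqrt(MU_EARTH / r1_km)                    # circular speed at departure
    v2 = math.sqrt(MU_EARTH / r2_km)                    # circular speed at arrival
    v_p = math.sqrt(MU_EARTH * (2 / r1_km - 1 / a_t))   # vis-viva speed at transfer perigee
    v_a = math.sqrt(MU_EARTH * (2 / r2_km - 1 / a_t))   # vis-viva speed at transfer apogee
    return (v_p - v1) + (v2 - v_a)

# Example: raise a 500 km circular orbit to 800 km (Earth radius taken as 6,378 km)
dv = hohmann_dv(6378 + 500, 6378 + 800)
print(f"total delta-V ≈ {dv * 1000:.0f} m/s")  # ≈ 161 m/s
```

Delta-V terms like this one, summed with plane changes, station-keeping and de-orbit allowances, make up the delta-V budget mentioned in item 1.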
Problem modelling
STARMAD is a tool allowing the user to perform a space mission analysis and design in a complete, simple and fast way. It can be compared to an electronic handbook in which you simply insert the required inputs, press enter and see the results. The user does not need a large quantity of literature to analyse a space mission subject. Based on the task, STARMAD uses suitable formulas to find the solution. Starting from the requirements, the engineer can carry out fundamental space mission analyses, not only in terms of engineering parameters but also in terms of mission operations complexity. In addition, by configuring STARMAD with an existing space mission and satellite, it is possible to test critical modifications. Furthermore, it offers the possibility to work in a concurrent as well as in an independent way. The System Algorithms technique is used to compute system performance. It applies the basic physical or geometric formulas associated with a particular system or process, such as those for determining resolution, the size of an antenna, a link budget, or geometric coverage. System Algorithms provide the best method for computing performance, providing clear traceability and establishing the relationship between design parameters and performance characteristics. STARMAD's computations are based on the System Algorithms technique, additionally implementing all the design parameter interdependencies and automatically exchanging results throughout its sections. This simplifies and streamlines the overall design process. The method is powerful, showing how performance varies with key parameters. Its limitation is the assumption that whatever limits system performance, such as optical quality or pointing stability, has been correctly identified. Despite this limitation, the System Algorithms technique is ideal for preliminary assessments of space missions.
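As an example of such a System Algorithm, the diffraction-limited ground resolution of an optical payload follows directly from the Rayleigh criterion. A hedged sketch with assumed example values (wavelength, altitude and aperture are illustrative, not STARMAD defaults):

```python
def ground_resolution_m(wavelength_m: float, altitude_m: float, aperture_m: float) -> float:
    """Diffraction-limited nadir ground resolution.

    Rayleigh criterion: angular resolution theta = 1.22 * lambda / D,
    projected onto the ground from altitude h as theta * h.
    """
    return 1.22 * wavelength_m * altitude_m / aperture_m

# Example: visible light (0.5 µm), 700 km altitude, 0.5 m aperture
res = ground_resolution_m(0.5e-6, 700e3, 0.5)
print(f"ground resolution ≈ {res:.2f} m")  # ≈ 0.85 m
```

The value shows the traceability the text describes: halving the aperture diameter directly doubles the achievable ground resolution, with no intermediate modelling required.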
Concurrent approach
By automatically linking all space mission aspects with their associated interdependencies, STARMAD is able to simplify an otherwise complex problem. It facilitates fast and effective interaction of all disciplines involved, ensuring consistent, high-quality results. The software is an efficient working tool for ensuring consistent end-to-end design of a space mission. The concurrent approach improves engineer involvement, and hence the engineer's understanding and efficiency in designing a space mission. The spacecraft design process is based on mathematical models, which are implemented inside STARMAD. By this means, a consistent set of design parameters can be automatically defined and exchanged throughout the software's sections and subsections, and any change which may affect other disciplines can immediately be identified and assessed. In this way, a number of design iterations can be performed, and different design options can easily be analysed and compared. Via STARMAD, it is thus possible to streamline the design process, achieving cost reduction and quality improvements.

STARMAD software structure
STARMAD is principally divided into five primary sections, each of which contains several subsections. Through the graphical user interface (GUI), the user can define the type of problem. The main user interface is composed of 30 subsections (opened by pressing their respective buttons). Each subsection can be configured with the required inputs and run independently from the others. Going back to the main GUI, the user can solve several different concurrent space mission tasks by calling other subsections and performing analyses in parallel, keeping the complexity of the problem under control. All the sections involved keep track of the evaluations performed and automatically set their inputs based on the obtained results. This process allows the user to concurrently design and analyse space mission subjects.
Every subsection has its own output section showing results, data and a design summary when the simulation is performed. All results can be saved, stored, or re-loaded for modification. The content of each of the five primary sections is described below.

Orbit analysis
It is composed of the following sub-divisions:
Orbit Dynamics, evaluating basic spacecraft dynamics and orbit perturbations, both atmospheric and gravitational;
Orbit Geometry, where general coverage characteristics and target viewing are calculated;
Orbit Manoeuvres and Maintenance for circular orbits. In this section, setting main input parameters, such as parking and operational orbit parameters, or re-phasing, de-orbit and end-of-life parameters, Orbit Manoeuvre outputs (orbit dynamics, Hohmann transfer, plane change, low-thrust spiral change) and Orbit Maintenance outputs (dynamics, atmospheric and gravitational effects, re-phasing, end-of-life manoeuvre parameters) are evaluated;
Interplanetary Orbit Transfer, where the main output parameters for an interplanetary transfer are evaluated under the hypothesis of the patched conic approximation, such as velocity, energy, time of flight, and the delta-V to initiate and complete the transfer. For Earth departure, a circular orbit is assumed. Heliocentric transfers are also considered;
Delta-V and Geometry Budgets, calculating all the elements of the delta-V budget and the mapping and pointing errors.

Observation payload analysis
It is subdivided into the sections:
Electromagnetic Spectrum & Optics, where typical EM spectrum and optics parameters are determined, such as irradiance, emittance, swath width and ground resolution.
Observation Payload Sizing, where the sizing of the observation instrument in terms of dimensions, mass and power is evaluated, as well as the payload data rate.

Spacecraft subsystems design
It is composed of all main subsystems required to build a satellite.
Spacecraft preliminary sizing, in terms of mass, power, volume, area and moments of inertia of the S/C body and solar array.
Attitude control, composed of two sections:
- Torque estimates: where orbit characteristics, environmental torques and slew characteristics are calculated.
- Attitude control sizing: evaluating the main parameters of the momentum wheel, reaction wheel, thrusters and magnetic torquer.
Communications:
- Uplink: setting the ground transmitter parameters, this section evaluates outputs for the ground and spacecraft transmitters, geometry and atmosphere perturbations, and the link budget in terms of EIRP, space loss, atmospheric attenuation, rain attenuation, G/T, antenna pointing losses, Eb/No, C/No and margin.
- Downlink: setting the spacecraft transmitter parameters, this section evaluates the corresponding outputs for the ground and spacecraft transmitters, geometry and atmosphere perturbations, and the link budget.
Power subsystem sizing, which is subdivided into three sections:
- Solar array;
- Secondary battery;
- Other primary sources, to calculate solar array mass and power budgets, battery capacity and mass, and power and mass for fuel cells, solar thermal dynamics, radioisotope or nuclear reactor sources (if on board).
Propulsion subsystem, composed of the following sections:
- Sizing, which principally calculates mass, power, mass flow rate and thrust for both chemical and electric propulsion systems;
- Thermodynamics, evaluating specific impulses and combustion chamber and nozzle characteristics. Additionally, it performs a complete sizing of the liquid propulsion system;
- Storage and Feed, where oxidiser and fuel characteristics plus bulk density and volume are determined.
Structural Analysis for:
- Monocoque structure;
- Semi-monocoque structure, to calculate loads, axial and lateral deflections, stress, bending moment and margins of safety.
Thermal control, to perform analyses on the spacecraft body and solar array. The main parameters evaluated are: solar and albedo energy absorbed, maximum and minimum equilibrium temperature, maximum and minimum planet IR energy absorbed, possible changes to the S/C to reduce the maximum equilibrium temperature to a specified upper limit, and heater requirements during eclipse.
System Sizing Summary, summarising the main features of the designed spacecraft.

Vehicle information
This software section gives an overview of the principal characteristics of existing launch and transfer vehicles. The possibility of implementing a user-defined vehicle is incorporated in both sections (launchers and transfers). For launchers: spacecraft loaded mass, performance (mass to orbit, available inclinations, injection accuracy, flight rate), reliability experience, payload compartment characteristics, frequency, accelerations and price are summarised. For transfer vehicles: delta-V capability, thrust, mass flow rate, specific impulse, burn time, and mass characteristics are computed.

Mission operation complexity
This section summarises the results of the mission operations complexity investigation using NASA's JPL model as described in the text "Cost Effective Space Mission Operations". It is divided into four subsections as follows:
Mission Design and Planning: science and engineering, GNC and tracking events are monitored in terms of frequency, criticality, data return and planning;
Flight System Design: the operations complexity events are related to command, monitoring, pointing, automation and flight margins;
Operational Risk Avoidance: command and control, data return, performance analysis and fault recovery are taken into account;
Ground Systems: interfaces and ground system complexity, design of the ground system, organisation and staffing, and automation are all considered in the evaluation of the mission operations complexity.
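The link budgets produced by the Communications subsection described earlier reduce to the standard link equation in decibels. A minimal sketch with assumed example values (the EIRP, G/T, frequency, range and required Eb/No are illustrative, not STARMAD defaults):

```python
import math

BOLTZMANN_DBW = -228.6  # 10*log10 of Boltzmann's constant, dBW/(Hz·K)
C = 299792458.0         # speed of light, m/s

def free_space_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: Ls = 20*log10(4*pi*d*f/c), in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def eb_no_db(eirp_dbw: float, g_over_t_dbk: float, distance_m: float,
             freq_hz: float, data_rate_bps: float, atm_loss_db: float = 0.0) -> float:
    """Received Eb/No = EIRP - Ls - La + G/T - 10*log10(k) - 10*log10(R), all in dB."""
    ls = free_space_loss_db(distance_m, freq_hz)
    return (eirp_dbw - ls - atm_loss_db + g_over_t_dbk
            - BOLTZMANN_DBW - 10 * math.log10(data_rate_bps))

# Example: S-band downlink, 2,000 km slant range, 1 Mbit/s
ebno = eb_no_db(eirp_dbw=10.0, g_over_t_dbk=20.0,
                distance_m=2.0e6, freq_hz=2.2e9, data_rate_bps=1e6)
margin = ebno - 9.6  # versus an assumed 9.6 dB requirement (uncoded BPSK at 1e-5 BER)
print(f"Eb/No ≈ {ebno:.1f} dB, margin ≈ {margin:.1f} dB")
```

The margin term at the end is the same "Margin" output the Communications subsection lists alongside EIRP, space loss, G/T and Eb/No.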
In each of these subsections, a level of complexity (high, medium, or low) is given for all the events related to the particular mission subject. The level of total mission complexity and the predicted full-time-equivalent manpower for efficient operations are given as a final output.

Mobile version
Smartphone and tablet usage is growing widely across all technological fields, helping users perform the most varied activities 'on the go'. The increasing computational power of such devices now also allows complex analyses to be performed, so they can be used in the space industry to help users build space missions, spacecraft, systems and products requiring quick solutions for system design and software development. "iStarmad" is the iDevice extension of STARMAD: an app for performing preliminary end-to-end space mission analysis and spacecraft subsystems design on iPhone/iPad devices.

History
STARMAD was created in 2007 by Davide Starnone. Over the following years it was promoted around the world through several international conferences, such as the IAC, and the software was sold worldwide for seven years. In 2014 SSBV, a Dutch-led technology-driven company active in the domains of (aero)space and defence & security, acquired STARMAD, which is now officially included in its portfolio.

External links
STARMAD website
STARMAD Brochure
https://en.wikipedia.org/wiki/LinuxTag
LinuxTag
LinuxTag (the name is a compound with the German Tag, meaning assembly, conference or meeting) is a free software exposition with an emphasis on Linux (but also BSD), held annually in Germany. LinuxTag claims to be Europe's largest exhibition for "open source software" and aims to provide a comprehensive overview of the Linux and free software market, and to promote contacts between users and developers. LinuxTag is one of the world's most important events of this kind. LinuxTag's slogan, "Where .COM meets .ORG", refers to its stated aim of bringing together commercial and non-commercial groups in the IT sector. Each year's event also has its own subtitle.

Promotion of free software
LinuxTag sees itself as part of the free software movement, and promotes this community to an extraordinary degree by supporting numerous open source projects. LinuxTag offers a way for these projects to promote their software and their concepts, and thus present themselves to the public in an appropriate manner, with their own booths, forums and lectures. The goal is to encourage projects to share concepts and content to the benefit of other groups and companies, and to provide forums for in-depth discussions of new technologies and opportunities.

LinuxTag e.V.
The non-profit association "LinuxTag e.V." was founded in preparation for LinuxTag's move from Kaiserslautern to the University of Stuttgart in 2000. The association plans and organizes the LinuxTag event through volunteer work and guides its ideological development. The association LinuxTag e.V. is registered in Association Register VR 2239 of the Kaiserslautern District Court. The association manages the LinuxTag name and word mark. The purpose of the association, according to its bylaws, is "the promotion of Free Software", and is pursued through the organization of the LinuxTag events. The association is represented by a three-person executive board, supplemented by several representatives with delegated authority.
All members of the LinuxTag association are volunteers and receive no remuneration for their service. (In 2005, the First Chairman and CFO were employed and remunerated by the association from 1 April until 31 July.) Surpluses resulting from the LinuxTag events or from sponsorship are reinvested in the association's non-profit activities. History LinuxTag was launched in 1996 by a handful of active members of the Unix Working Group (Unix-AG) at the University of Kaiserslautern, who wanted to inform the public about the young technology of Linux and open source software. The first LinuxTag event drew only a small number of participants. Since then, however, the event has changed venues several times to keep pace with rapidly growing numbers of exhibitors and visitors. Kaiserslautern The first LinuxTag conference and exhibition was held at the University of Technology in Kaiserslautern. LinuxTag 1996 - 1999 The first LinuxTag was a theme night on Linux. In 1998, LinuxTag drew 3,000 visitors. In 1999 the event was announced nationally and drew some 7,000 visitors; it was the first time LinuxTag filled a whole building, and the last time it was held in Kaiserslautern. In the aftermath of the event, the LinuxTag association was founded. Stuttgart In 2000 and 2001 LinuxTag was held in Stuttgart. LinuxTag 2000 LinuxTag 2000 was held at the Stuttgart exhibition center from 29 June to 2 July, and received up to 17,000 visitors. The conference included a business track for the first time, devoted to such topics as IT security, legal aspects of free software, and potential uses of Linux and the open-source concept in commercial applications. IT decision-makers were shown case studies in applications of free software. LinuxTag 2001 LinuxTag 2001 took place at the Stuttgart exhibition center from 5 to 8 July, with 14,870 visitors. The event was held under the patronage of the German Ministry of the Economy. Keynote speakers were Eric S. 
Raymond, Rob "Cmdr Taco" Malda of Slashdot and Jon "Maddog" Hall of Linux International. Karlsruhe From 2002 to 2005, the LinuxTag conference and exhibition was held in Karlsruhe. LinuxTag 2002 LinuxTag was held in the Karlsruhe Convention Centre for the first time from 6 to 9 June 2002. There were about 13,000 visitors. The motto of the conference was "Open your mind, open your heart, open your source!". About 100 exhibitors took part in the exhibition. LinuxTag 2003 LinuxTag 2003 was titled "Open Horizons" and was held from 10 to 13 July 2003, for the second time in Karlsruhe. With the 10-euro admission ticket, visitors received a Tux pin and a Knoppix DVD, available at LinuxTag for the first time. With 19,500 visitors, attendance increased by 40 percent over the previous year. Both businesses and non-profit groups were represented as exhibitors; Apple showed Mac OS X in conjunction with open source. Around 150 exhibitors took part in 2003. Other highlights included the release of OpenGroupware.org, modeled on OpenOffice.org, as open source, and the free conversion of several dozen Xboxes to Linux, some via a hardware modification involving two solder points, others by installing the so-called MechInstaller. There was also a programming competition, and on Sunday a world record attempt took place between 13:00 and 14:00: 100 Linux desktop sessions with GNOME and KDE were to run simultaneously on a single server, and anyone could join in over the Internet. The result was not disclosed. Alongside the exhibition, congresses were held in which renowned experts spoke on specific subject areas. There was, for example, a Debian conference, and on Sunday a lecture on TCPA followed by a discussion. The Business and Administration Congress grew by about 60 percent to 400 participants. The free lecture program was opened on Friday by Rezzo Schlauch, Parliamentary State Secretary in the Federal Ministry of Economics and Labour (BMWA). 
Keynote speakers were Jon "Maddog" Hall of Linux International, Georg Greve of the Free Software Foundation Europe and Matthias Kalle Dalheimer of Klarälvdalens Datakonsult AB. Webcams allowed a virtual visit to the fair; the Pingu-Cam (Tux the penguin is the Linux mascot) showed pictures from the Karlsruhe zoo, located right next to the fairgrounds. LinuxTag 2004 LinuxTag 2004 took place from 23 to 26 June 2004, for the third time in the congress center in Karlsruhe. Those who registered on the homepage got in free; for a 10-euro admission, however, visitors received a Tux pin, a Knoppix DVD and a DVD with FreeBSD, NetBSD and OpenBSD. LinuxTag 2004 had the slogan "Free source - free world". 16,175 visitors were counted. With a record number of about 170 exhibitors, numerous large and medium-sized enterprises were present alongside many volunteer projects. Hewlett-Packard was Official Cornerstone Partner for the third time. Other major companies were C & L Verlag, Intel, Novell, Oracle, SAP and Sun Microsystems. For the first time, Microsoft was represented with a booth. The one-day Business and Administration Congress on 24 June presented case studies and success stories about the use of open source software in business and government. Among other topics, the problem of viruses and worms was discussed. For the free conference there was a record turnout of about 350 proposals from 20 countries, 130 of which could be accommodated in the program. Software patents were an important topic. For the first time, LPI 101 certification was possible at LinuxTag. Contests at this LinuxTag included a coding marathon and a Hacking Contest. LinuxTag 2005 LinuxTag 2005 took place from 22 to 25 June in the Congress Center Karlsruhe. LinuxTag 2005 was the 11th LinuxTag and was titled "Linux everywhere". 
In addition to the exhibition of companies involved with Linux in one way or another, there was once again a presentation program in 2005. The Business and Administration Congress also took place again, on 22 June 2005, and visitors could take part in various tutorials during LinuxTag. In his opening speech, Jimmy Wales announced the collaboration between Wikipedia and KDE: through a web interface, any program can directly access Wikipedia, and starting with version 1.3 the KDE media player Amarok can access the Wikipedia articles of artists. The organizers spoke of 12,000 visitors, a figure attributed mainly to the newly designed entrance fees and the hottest week of the year. Wiesbaden In 2006 LinuxTag was held in Wiesbaden. LinuxTag 2006 LinuxTag 2006 took place from 3 to 6 May 2006 in the Rhein-Main-Hallen in Wiesbaden under the theme "See, what's ahead". According to the organizers, over 9,000 people from over 30 nations attended LinuxTag 2006. There were many, often international, lectures and various information booths; present among others were IBM, Avira and Sun Microsystems, but some others such as Hewlett-Packard or Red Hat were missing. Three teams participated in the Hacking Contest. The most visited lecture of the year was the keynote by Ubuntu founder Mark Shuttleworth, who referred to himself as the "chief dreamer of Ubuntu" and praised the good cooperation between users and developers. He stressed that Kubuntu and Ubuntu should be treated equally and that there is good cooperation between the developers. There was also the opportunity to follow some of the lectures via video stream, which was used by an estimated 1,800 people. Berlin Since 2007, LinuxTag has been held in Berlin in the exhibition halls under the Berlin Radio Tower. LinuxTag 2007 LinuxTag 2007 took place from 30 May to 2 June 2007 with the slogan "Come in: We're open". It was attended by about 9,600 people. The event was held under the auspices of Interior Minister Wolfgang Schäuble. 
Because of the Interior Minister's political positions, this patronage sparked a lively wave of discussion that grew into calls to boycott LinuxTag. The uproar in the Linux community was so great that even foreign sites reported on it. LinuxTag 2008 LinuxTag 2008 took place from 28 to 31 May at the Berlin Exhibition Grounds with 11,612 visitors. The second LinuxTag in the German capital was under the patronage of German Foreign Minister and Vice Chancellor Frank-Walter Steinmeier and was part of a six-day "IT Week in the Capital Region", which included the IT business trade fair IT Profits, held for the fourth time in Berlin under the patronage of the Federal Minister of Transport. Held at the same time were the second German Asterisk Day, a user and developer conference on Voice over IP, and the 8th @kit Congress on legal issues of professional IT use. Important topics were "Highlights of digital lifestyle" and the "Mobile + Embedded Area". LinuxTag 2009 LinuxTag 2009 took place from 24 to 27 June on the Berlin Exhibition Grounds. It had more than 10,000 visitors and was under the patronage of German Foreign Minister and Vice Chancellor Frank-Walter Steinmeier. The new president of the Free Software Foundation Europe, Karsten Gerloff, attended LinuxTag. Focal points were the "mapping of business processes using Linux" and "Open Source in the colors of the tricolore", for which 14 open source suppliers from France showed their products and services. LinuxTag 2010 LinuxTag 2010 was the 16th LinuxTag and took place from 9 to 12 June 2010 at the Berlin Exhibition Grounds. It was attended by about 11,600 people and was under the patronage of Cornelia Rogall-Grothe, the Federal Government Commissioner for Information Technology. 
Keynote speakers included Microsoft general manager James Utzschneider, who stunned the audience with his open approach to open source; SugarCRM CEO Larry Augustin, who underlined the economic impact of OSS and its connection with the emerging trend of cloud computing; Google's open source chief Chris DiBona, who underlined the high professional level of the congress; kernel developer Jonathan Corbet, who gave an outlook on the next Linux kernel, 2.6.35; and Ubuntu founder Mark Shuttleworth, who staked out the milestones for Ubuntu desktops. LinuxTag 2011 The 17th LinuxTag was held from 11 to 14 May 2011 on the Berlin Exhibition Grounds with the slogan "Where .com meets .org". It was attended by 11,578 visitors and was under the patronage of Cornelia Rogall-Grothe, the Federal Government Commissioner for Information Technology. Keynotes were given by Wim Coekaerts (Oracle), Bradley Kuhn (Software Freedom Conservancy) and Daniel Walsh (Red Hat). LinuxTag 2012 The 18th LinuxTag was held from 23 to 26 May 2012 on the Berlin Exhibition Grounds with the motto "Open minds create effective solutions". It was under the patronage of Cornelia Rogall-Grothe, the Federal Government Commissioner for Information Technology. LinuxTag saw the premiere of the "Open MindManager Morning", in which industry experts and educators discussed and philosophized about IT and changes in society. Also premiering was the new lecture series "Open Minds Economy", organized by the Open Source Business Alliance and Messe Berlin, which presented the successful model of open source software in business and society. Keynotes were given by Jimmy Schulz, chairman of the project group "Interoperability, standards and open source" of the German Bundestag's Commission of Inquiry on the Internet and Digital Society; Ulrich Drepper, maintainer of the GNU C standard library glibc; and Lars Knoll, Nokia employee and chief maintainer of the Qt library. 
LinuxTag 2013 The 19th LinuxTag was held from 22 to 25 May 2013 on the Berlin Exhibition Grounds with the motto "In the cloud - the triumph of free software goes on". It was under the patronage of Cornelia Rogall-Grothe, the Federal Government Commissioner for Information Technology. The Open IT Summit premiered, organized by the Open Source Business Alliance (OSBA) and Messe Berlin as a parallel conference to LinuxTag with the goal of discussing open source in the business environment. The OpenStack Day also took place, in cooperation with the OpenStack Foundation, as the first major subconference on OpenStack in Europe. The Foundation is headquartered in the U.S. and oversees the scalable cloud management platform of the same name. Keynotes were given by kernel developer Matthew Garrett, on Unified Extensible Firmware Interface (UEFI) and Secure Boot, and by Benjamin Mako Hill, a researcher at the Massachusetts Institute of Technology, who called so-called anti-features, restrictions that manufacturers build into devices, unacceptable. LinuxTag 2014 In recent years, the number of visitors to LinuxTag had stagnated despite the growing number of users of open-source programs. This decline in visitor numbers was interpreted as a side effect of the high market penetration of free software and Linux. In addition, for some years there had been many similar regional events that drew on the concept of LinuxTag. In order to adapt to these changes, LinuxTag focused in 2014 on the core issue of the professional use of open source software, and started a strategic partnership with the droidcon. The 20th LinuxTag took place between 8 and 10 May 2014 in the STATION Berlin. Held in spatial and temporal proximity were the Media Convention Berlin (6 to 7 May), the re:publica (6 to 8 May) and the droidcon (8 to 10 May 2014). 
All the events sought a close relationship with one another in order to benefit from their combined appeal. External links Official LinuxTag website German Official web page (with information in English as well) The Knoppix project OpenMusic, another LinuxTag-sponsored project. References Free-software events Linux conferences Recurring events established in 1996
59085063
https://en.wikipedia.org/wiki/Screenlife
Screenlife
Screenlife or computer screen film is a film format, a form of visual storytelling in which all the events occur on a computer, tablet or smartphone screen. It became popular in the 2010s with the growing impact of the Internet. According to Timur Bekmambetov, the Russian director and producer, a computer screen film should take place on one specific screen, never move outside of the screen, the camerawork should resemble the behavior of the device's camera, all the action should take place in real time without any visible transitions, and all the sounds should originate from the computer. After producing one of the first mainstream feature-length computer screen films, Unfriended, in 2014, Bekmambetov inaugurated screenlife. Features A screenlife video displays only the desktop of a computer or smartphone and the actions of the main character on this device: viewing files, surfing the Internet, Zoom or Skype calls, texting in messaging apps. Screenlife movies are made using computer or smartphone screen-recording technologies and GoPro cameras (and other portable cameras) that stand in for device cameras. Screenlife is not a film genre, because screenlife movies can be made in different genres: horror, thriller, comedy, etc. It is mostly known as a new storytelling format, since the computer or smartphone screen is also used in journalism and advertising as a visual source. History Filmmakers first used the screenlife format in 2010, in the new era of computers and the Internet. Screenlife builds on immersive cinema and on the pseudo-documentary formats of found footage (The Blair Witch Project) and mockumentary (Paranormal). However, the first trials of combining the classic film format with the demonstration of desktops and interfaces were made in the 2000s. For example, the horror movie The Collingswood Story shows everything through the web cameras of the main characters. 
Some elements of screenlife were already recognizable in the Night Watch and Day Watch movies by Timur Bekmambetov. The Russian director and producer made the first full-length screenlife film, Unfriended, in 2014. Unfriended earned $64 million at the box office on a budget of $1 million. Bekmambetov's most successful screenlife movie is the thriller Searching. The main roles were performed by American actors John Cho and Debra Messing, and Aneesh Chaganty was the director. The film received an Audience Choice Award at the Sundance Film Festival and grossed over $75 million worldwide on a budget of about $700,000. In 2018, Bekmambetov directed a screenlife film, Profile, for the first time (in all previous projects he had acted as producer). Profile is a political thriller about the online recruitment of a British journalist by an Islamic terrorist. The film received the Audience Choice Award in the Panorama program of the Berlin Film Festival and at the SXSW Festival in the United States. In 2019, the first screenlife TV series about the zombie apocalypse, Dead of Night, was released. The series was produced by Timur Bekmambetov and was available to view on smartphones in the Snapchat application. In 2020, the second season was released. In June 2020, Timur Bekmambetov signed an agreement with Universal Pictures for the production of five films in the screenlife format. In October 2020, the media reported that Bekmambetov was producing a new blockbuster in the screenlife format, starring American actors Eva Longoria and Ice Cube. In 2021, Timur Bekmambetov and Igor Tsai presented their new screenlife project R#J at the Sundance Film Festival, an experimental romantic drama that adapts the love story of Romeo and Juliet to the modern world. R#J was also presented at the SXSW Film Festival, where it won an Adobe Editing Award. 
In March 2021, Timur Bekmambetov's Bazelevs studio was included in the list of the world's most innovative companies by the American magazine Fast Company for its use of screenlife shooting technologies. In 2021, SXSW also presented a vertical miniseries, iBible: Swipe Righteous, a modern retelling of Bible stories on a smartphone screen. In March 2021, the media reported on the filming of the screenlife comedy #fbf with Ashley Judd. In June 2021, the media reported on the filming of a new Hollywood screenlife thriller, Resurrected (directed by Egor Baranov), with Dave Davis (Dybbuk) in the leading role. The film is set in the near future, in which the Vatican has learned to resurrect people. Russia The first full-length screenlife film in Russian was Roman Karimov's trash comedy Dnyukha! (Russian: Днюха!), released in 2018, in which all the action takes place on the screens of gadgets. In December 2020, an interactive comedy in the screenlife format, The Player (Russian: Игрок), was released on an interactive platform where viewers could play alongside the main character. In September 2021, the first Russian screenlife thriller, #Blue_Whale (Russian: #ХОЧУВИГРУ), directed by Anna Zaitseva, will be released. The film is dedicated to a series of mysterious teenage deaths in a provincial city. One of the first Russian screenlife TV series was Light from the Other World (Russian: Света с того света), filmed by Maxim Pezhemsky. During the pandemic, about 10 TV series filmed in the Zoom-conference format were released in Russia. In 2018, the first documentary project in the screenlife format, 1968.DIGITAL, was released in Russia. It was a vertical web series produced by Mikhail Zygar, Karen Shahinyan and Timur Bekmambetov, telling the story of a real hero from 1968 through the smartphone screen that he could have had. 
Bekmambetov and Zygar adapted the project into English and released it in the United States together with BuzzFeed under the title Future History: 1968. Other countries At the end of 2020, Bekmambetov signed an agreement with Sharad Devarajan, co-founder of the film company Graphic India, and the CEO of the Indian media company Reliance Entertainment to create original Indian films in the screenlife format. Format In the screenlife format, the film set is the desktop of the computer, and the files, folders and screen wallpapers are the scenery. The movement of the cursor is important because the viewer's attention is concentrated on it. The main difference between the post-production of traditional and screenlife films is the time required for editing: on average, editing a screenlife movie takes 6–9 months. The longer post-production is compensated for by a shorter production period compared to traditional cinema (for example, Searching was filmed in 13 days). Screencasting software is usually used to capture the device screen, and a GoPro camera is used for shooting. Actors often need to act as their own camera operators to bring life to the film. 
Examples Feature films The Collingswood Story (2005) Skydiver (2010) Megan Is Missing (2011) V/H/S (2012) The Den (2013) Unfriended (2014) Unfriended: Dark Web (2018), sequel to Unfriended Open Windows (2014) Ratter (2015) Face 2 Face (2016) Sickhouse (2016) Searching (2018) Profile (2018) Host (2020) C U Soon (2020) Spree (2020) Safer At Home (2021) Untitled Horror Movie (2021) Insiders (TBA) R#J (2021) #Blue_Whale (2021) Short films TAUCHER (alternative title: David (41) and Karla (38)), a segment of Abschiede (2009) Internet Story (2011) The Sick Thing That Happened to Emily When She Was Younger, a segment of V/H/S (2012) Noah (2013) 2088 (2021) Love in Isolation (2021) Filtered (2021) AzulScuro (2021) Documentaries Transformers: The Premake (2014) Albert Figurt Desktop Horror / a video essay (2016) Future History: 1968 (2018) Chloé Galibert-Lainé Watching the Pain of Others (2018) Gabrielle Stemmer Clean with Me (After Dark) (2019) The Invention of Chris Marker (2020) iBible: Swipe Righteous (2021) Web series United States The Scene (2004) Web Therapy (2011) Dead of Night (2019–2020), 2 seasons Gameboys (2020) Russia Sveta from Another World (Russian: Света с того света) (2018) Feat (Russian: Подвиг) (2019) Nagiyev on Quarantine (Russian: Нагиев на карантине) (2020) #SittingAtHome (Russian: #СидЯдома) (2020) Madness (Russian: Беезумие) (2020) Together (Russian: Все вместе) (2020) Safe Connections (Russian: Безопасные связи) (2020) Isolation (Russian: Изоляция) (2020) Picky Days (Russian: Окаянные дни) (2020) #InMaskShow (Russian: #вмаскешоу) (2020) Locked (Russian: Взаперти) (2020) Alice (Russian: Алиса) (2020) Sveta from Another World 2 (Russian: Света с того света 2) (2021) Quarantine Stories (Russian: Истории карантина) (2021) Television Connection Lost (2015), episode 16 of the sixth season of Modern Family Awards 2014: Remove from Friends (Russian: Убрать из друзей) — Most Innovative Film according to the Fantasia Film Festival 2018: Search (Russian: Поиск) — 2018 
Sundance Film Festival Audience Award (NEXT Program) and Alfred Sloan Award 2018: Profile (Russian: Профиль) — Audience Award of the Berlin Film Festival (Panorama Program) 2018: Profile (Russian: Профиль) — SXSW Film Festival Audience Award 2021: R#J — SXSW Film Festival Special Adobe Editing Award See also Found footage Real time Screencast Films about computers References Cinematic techniques Film genres
14901231
https://en.wikipedia.org/wiki/1404%20Ajax
1404 Ajax
1404 Ajax is a carbonaceous Jupiter trojan from the Greek camp, approximately 81–96 kilometers in diameter. It was discovered on 17 August 1936, by German astronomer Karl Reinmuth at Heidelberg Observatory in southern Germany, and named after the legendary warrior Ajax from Greek mythology. The assumed C-type asteroid is among the 40 largest Jupiter trojans and has a longer-than-average rotation period of 29.4 hours. Orbit and classification Ajax is a C-type asteroid that orbits in the leading Greek camp at Jupiter's Lagrangian point, 60° ahead of the planet in its orbit (see Trojans in astronomy). It is also a non-family asteroid in the Jovian background population. Jupiter trojans are thought to have been captured into their orbits during or shortly after the early stages of the formation of the Solar System. More than 4,500 Jupiter trojans have been discovered in the Greek camp, and more than 7,000 in total. Ajax orbits the Sun at a distance of 4.7–5.9 AU once every 12 years and 3 months (4,459 days; semi-major axis of 5.3 AU). Its orbit has an eccentricity of 0.11 and an inclination of 18° with respect to the ecliptic. The body's observation arc begins at Heidelberg six days after its official discovery observations in August 1936. Physical characteristics Ajax is an assumed carbonaceous C-type asteroid, while its V–I color index of 0.96 agrees with that of most D-type asteroids, the dominant spectral type among the large Jupiter trojans. Rotation period In December 2010, a rotational lightcurve of Ajax was obtained from photometric observations taken by Robert Stephens at the Goat Mountain Astronomical Research Station in California. Lightcurve analysis gave a rotation period of 29.38 hours with a brightness variation of 0.30 magnitude, superseding fragmentary photometric measurements by Richard P. Binzel (1988), and by Roberto Crippa and Federico Manzini (2009) at the Sozzago Astronomical Station, which gave periods of 28.4 and 34 hours, respectively. 
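The orbital elements quoted in the orbit section above are self-consistent: for a heliocentric orbit, Kepler's third law gives the period in years as the semi-major axis in AU raised to the power 3/2. A minimal sketch (the constants are standard values, not taken from the article's sources):

```python
# Cross-check 1404 Ajax's orbital period from its semi-major axis
# using Kepler's third law for a heliocentric orbit: P[yr] = a[AU] ** 1.5.

def orbital_period_days(a_au: float) -> float:
    """Return the orbital period in days for a given semi-major axis in AU."""
    period_years = a_au ** 1.5
    return period_years * 365.25  # Julian year in days

# Semi-major axis of 5.3 AU, as quoted above:
print(round(orbital_period_days(5.3)))  # close to the quoted 4,459 days
```

The small residual against the quoted 4,459 days comes from rounding the semi-major axis to two significant figures.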
Diameter and albedo According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Ajax measures between 81.69 and 96.34 kilometers in diameter and its surface has an albedo between 0.048 and 0.0665. The Collaborative Asteroid Lightcurve Link derives an albedo of 0.0508 and a diameter of 81.43 kilometers based on an absolute magnitude of 9.3. Naming This minor planet was named for Ajax the Great, a Greek warrior of great strength and courage in the Trojan War. He was the half-brother of Teucer and son of King Telamon, and killed himself after Achilles' armor was awarded to Odysseus rather than to him. Several other Jupiter trojans are also named after these figures from Greek mythology. The official naming of Ajax was first cited in The Names of the Minor Planets by Paul Herget in 1955. References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 001404 Discoveries by Karl Wilhelm Reinmuth Minor planets named from Greek mythology Named minor planets 19360817
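The diameter that the Collaborative Asteroid Lightcurve Link derives above follows from the standard relation between an asteroid's absolute magnitude H, geometric albedo p and diameter D, namely D[km] = 1329 / sqrt(p) * 10^(-H/5). A short sketch using the article's quoted values:

```python
import math

# Standard asteroid size relation: D [km] = 1329 / sqrt(p_V) * 10 ** (-H / 5),
# where H is the absolute magnitude and p_V the geometric (visual) albedo.

def diameter_km(h_mag: float, albedo: float) -> float:
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

# Values quoted for 1404 Ajax by the Collaborative Asteroid Lightcurve Link:
print(round(diameter_km(9.3, 0.0508), 1))  # ~81.4 km, matching the quoted 81.43 km
```

The same relation explains the survey spread: a darker surface (lower albedo) implies a larger diameter for the same brightness.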
3678414
https://en.wikipedia.org/wiki/EMBOSS
EMBOSS
EMBOSS is a free open source software analysis package developed for the needs of the molecular biology and bioinformatics user community. The software automatically copes with data in a variety of formats and even allows transparent retrieval of sequence data from the web. Also, as extensive libraries are provided with the package, it is a platform to allow other scientists to develop and release software in true open source spirit. EMBOSS also integrates a range of currently available packages and tools for sequence analysis into a seamless whole. EMBOSS is an acronym for European Molecular Biology Open Software Suite. The European part of the name hints at the wider scope. The core EMBOSS groups are collaborating with many other groups to develop the new applications that the users need. This was done from the beginning with EMBnet, the European Molecular Biology Network. EMBnet has many nodes worldwide most of which are national bioinformatics services. EMBnet has the programming expertise. In September 1998, the first workshop was held, when 30 people from EMBnet went to Hinxton to learn about EMBOSS and to discuss the way forward. The EMBOSS package contains a variety of applications for sequence alignment, rapid database searching with sequence patterns, protein motif identification (including domain analysis), and much more. The AJAX and NUCLEUS libraries are released under the GNU Library General Public Licence. EMBOSS applications are released under the GNU General Public Licence. EMBOSS application groups See also Open Bioinformatics Foundation Soaplab - A SOAP web service interface including EMBOSS Genostar - Integration of some of EMBOSS tools in a graphical application References External links Bioinformatics software Free science software Science and technology in Cambridgeshire South Cambridgeshire District
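As an illustration of the kind of sequence analysis the suite provides, EMBOSS's `needle` application computes Needleman–Wunsch global alignments. The following minimal sketch shows the underlying dynamic-programming score; the scoring values (match +1, mismatch −1, gap −1) are illustrative and are not EMBOSS's actual code or default parameters:

```python
def global_alignment_score(a: str, b: str,
                           match: int = 1, mismatch: int = -1,
                           gap: int = -1) -> int:
    """Needleman-Wunsch global alignment score (score only, no traceback)."""
    # prev[j] holds the best score aligning a[:i-1] with b[:j].
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        curr = [i * gap]
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(global_alignment_score("GATTACA", "GCATGCU"))  # 0 for this classic pair
```

Real EMBOSS tools add affine gap penalties, substitution matrices and traceback to recover the alignment itself, but the table-filling recurrence is the same idea.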
26405633
https://en.wikipedia.org/wiki/Monitor%20proofing
Monitor proofing
Monitor proofing or soft-proofing is a step in the prepress printing process. It uses specialized computer software and hardware to check the accuracy of text and images used for printed products. Monitor proofing differs from conventional forms of “hard-copy” or ink-on-paper color proofing in its use of a calibrated display(s) as the output device. Monitor proofing systems rely on calibration, profiling and color management to produce an accurate representation of how images will look when printed. While a “soft-proof” function has existed in desktop publishing applications for some time, commercial monitor proofing extends this capability to multiple users and multiple locations by specifying the hardware to be used, and by enforcing one set of calibration procedures and color management policies for all users of the system. This ensures that all viewers are calibrated to a known set of conditions, and given hardware of equal capabilities will therefore be viewing the same color on screen. System Components Monitor proofing systems consist of the following hardware and software components: Computer with calibration and profiling software Calibration and profiling software is often provided by, or bundled with the monitor proofing application by the software vendor. Color management support for ICC profiles created by the monitor proofing system is available through the operating system on most Windows, Macintosh and Linux computers. Graphics monitor High-quality monitors are a key enabling technology for monitor proofing systems. The International Organization for Standardization (ISO) finalized the standards for color proofing on displays in 2004 and since this publication date manufacturers including Apple, EIZO and NEC have produced LCD displays used in monitor proofing systems. 
Calibration hardware and software A colorimeter or spectrophotometer is used in conjunction with special calibration software to adjust the primary RGB monitor gains, set the white point to the desired color temperature and optionally set the monitor luminance to a specified level. The calibration target for a monitor proofing system is typically D50 (5000 K) and should be at least 160 cd/m2 luminance, as specified in ISO 12646. Monitor Proofing Application Software Monitor proofing application software integrates the necessary color management tools with a viewing application containing markup, review and approval tools and some form of routing or collaboration. Proofing assets reside in a database and are made available for viewing over LAN or Internet connections via client-server connections. Third Party Certification SWOP and Fogra offer independent third-party certifications to ensure that a monitor proofing system is capable of reproducing certain reference printing conditions tied to known and traceable standards. A monitor proof that is prepared in accordance with these certification programs can serve as a contract proof, a legally binding agreement between the proof provider and customer. References External links SWOP (Specifications for Web Offset Printing) Fogra Softproof Research - free download of Softproof Handbook and test files Fogra Softproofing System certification ISO (International Organization for Standardization) Printing
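The calibration targets described above can be expressed as a simple acceptance check. This sketch converts a measured white-point XYZ (with Y as absolute luminance in cd/m²) to CIE xy chromaticity and compares it against the D50 chromaticity and the 160 cd/m² floor; the 0.005 chromaticity tolerance is an illustrative assumption, not a value from ISO 12646:

```python
# Sketch: validate a measured monitor white point against a D50 / ISO 12646
# style target. The chromaticity tolerance below is an assumed example value.

D50_XY = (0.3457, 0.3585)   # CIE xy chromaticity of standard illuminant D50
MIN_LUMINANCE = 160.0       # cd/m^2 floor cited above from ISO 12646

def xy_chromaticity(X: float, Y: float, Z: float) -> tuple:
    s = X + Y + Z
    return X / s, Y / s

def passes_calibration(X: float, Y: float, Z: float,
                       tol: float = 0.005) -> bool:
    """Y is absolute luminance in cd/m^2; X and Z share the same scale."""
    x, y = xy_chromaticity(X, Y, Z)
    on_white_point = abs(x - D50_XY[0]) <= tol and abs(y - D50_XY[1]) <= tol
    return on_white_point and Y >= MIN_LUMINANCE

# A display measuring a D50-chromaticity white at 170 cd/m^2 passes:
print(passes_calibration(0.9642 * 170, 1.0 * 170, 0.8249 * 170))  # True
```

A D65-calibrated office display (x ≈ 0.3127) would fail the chromaticity test, which is exactly the mismatch a proofing workflow must avoid.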
9030896
https://en.wikipedia.org/wiki/Winradio
Winradio
WiNRADiO is the brand name for the radio communication equipment of the RADIXON Group in Melbourne, Australia, a subsidiary of the Robotron Group. It includes computer-based radio receivers, software, antennas and accessories for software-defined radio. This trading name was adopted as a result of the market success of the company's first product, the "WinRadio Card". This was an ISA bus card which transformed a Windows-based computer into a wide-coverage communications receiver, making it possible to receive point-to-point communications (ham radio, utilities, police, space research, etc.) on an ordinary personal computer. The company also acts as official distributor of the DRM (Digital Radio Mondiale) demodulator/decoder software, a result of the collaborative project between VT Merlin Communications, the Fraunhofer Institute for Integrated Circuits (Integrierte Schaltungen) and Coding Technologies. Apart from general-purpose radio, some unusual applications of WiNRADiO receivers have been reported, such as in radio astronomy or the search for extraterrestrial intelligence. The Adelaide Hills Radio Telescope (radio astronomy) uses WiNRADiO equipment and software. WiNRADiO receivers are also reportedly applied in audio engineering for stage performances, in particular for spectrum analysis of wireless microphones. Despite the Windows affiliation implied by the name, WiNRADiO receivers are also available for Macintosh. Linux software, LinRadio, has also been developed; however, it was last modified in 2002. References External links Amateur radio companies Companies established in 1991 Electronics companies of Australia Manufacturing companies based in Melbourne Australian brands
39214056
https://en.wikipedia.org/wiki/P.%20Anandan
P. Anandan
Padmanabhan Anandan is the former CEO of the Wadhwani Institute for Artificial Intelligence, an independent not-for-profit research institute focused on developing artificial intelligence-based applications for social good. He was formerly vice president for research at Adobe Systems and prior to that a distinguished scientist and managing director of Microsoft Research. He was managing director at Microsoft Research India, which he founded in January 2005 in Bangalore. He joined Microsoft Research in Redmond, Washington in 1997, where he founded and built the Interactive Visual Media group. He was also previously a professor of Computer Science at Yale University. Education Anandan holds an undergraduate degree in electrical engineering from the Indian Institute of Technology Madras, a master of science in computer science from the University of Nebraska, Lincoln, and a Ph.D. in computer science from the University of Massachusetts, Amherst. Career Anandan's research work has been in computer vision, in the areas of visual motion analysis, video surveillance and 3D scene modeling from images and video. He has published over 60 papers in leading journals and conferences, leading to several awards and honors. His papers published in 1987 and 1991 on these topics, as well as his joint paper with his student Michael Black, are essential reading in many computer-vision curricula. He has over 17,000 citations by other researchers in the field of computer vision. The "Black and Anandan" method helped popularize robust statistics in computer vision. This was facilitated by several papers that connected robust penalty functions to classical "line processes" used in Markov Random Fields (MRFs) at the time. His research has been used in real-world applications in entertainment (movies and games), defense and civilian security. The "Black and Anandan" optical flow algorithm has been widely used, for example, in special effects. 
The method was used to compute optical flow for the painterly effects in What Dreams May Come and The Prince of Egypt, and for registering 3D face scans in The Matrix Reloaded. Anandan held successive posts at Yale University, Sarnoff Corporation and Microsoft Research (MSR). At Yale, he was a founding member of the computer-vision research group. At Sarnoff, he led the video information processing group that invented "video mosaics". Based on a white paper he wrote, and with the help of the research community, the US Defense Advanced Research Projects Agency (DARPA) started the Video Surveillance and Monitoring (VSAM) research program, which funded research at Carnegie Mellon University, MIT and other major research universities. Many of the techniques, such as mosaics and moving object detection (and tracking), that were pioneered in the VSAM program and at Sarnoff are now part of various defense and civilian video surveillance and security systems, and several new companies such as ObjectVideo have been formed that use these technologies. At Microsoft Research in Redmond, Anandan helped build one of the leading computer vision research teams in the world. He established the Microsoft Research India laboratory in Bangalore in January 2005. Anandan has often spoken on research, innovation and technology at forums hosted by organizations such as the Confederation of Indian Industry and the Federation of Indian Chambers of Commerce and Industry, and also by the media. He was part of the working group constituted by the 12th Planning Commission, Government of India, to make recommendations on India's higher education policy. 
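The robust penalty functions mentioned above, which the "Black and Anandan" work helped popularize, can be sketched numerically. This is an illustrative sketch, not the published algorithm: it only compares a quadratic penalty with the Lorentzian penalty commonly associated with robust optical-flow formulations, to show why outliers (e.g. pixels at motion boundaries that violate brightness constancy) stop dominating the objective.

```python
import math

# Quadratic (least-squares) penalty: outliers dominate, since the penalty
# grows without bound as the residual grows.
def quadratic(x):
    return x * x

# Lorentzian robust penalty: grows only logarithmically for large residuals,
# so an outlier has bounded influence on the objective.
def lorentzian(x, sigma=1.0):
    return math.log(1.0 + (x * x) / (2.0 * sigma * sigma))

# For small residuals the two behave similarly; for a large residual the
# quadratic penalty explodes while the Lorentzian barely grows.
for r in (0.1, 1.0, 10.0):
    print(f"residual {r:5.1f}: quadratic {quadratic(r):8.2f}  "
          f"lorentzian {lorentzian(r):5.2f}")
```

In an actual optical-flow estimator such a penalty would be applied to the brightness-constancy and smoothness residuals and minimized over the flow field; the connection to "line processes" is that the robust penalty implicitly allows smoothness constraints to be switched off at motion boundaries.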
Anandan was on the founding board of governors of IIIT Delhi and the board of governors of IIT Madras. Honors and recognition Distinguished Alumni Award, University of Massachusetts, Amherst (2006) Alumni award, IIT Madras (2010) Microsoft Distinguished Scientist (2010) Microsoft Emeritus Researcher (2020) Hall of Computing, University of Nebraska (2010) See also VinFuture References External links "'Science and Nature Are Borderless, Classless'", The Hindu, 20 May 2013 "Microsoft Research India has come of age", Business Today, 8 March 2013 "'Innovate First and Innovate for the World'", Hindu Business Line, 29 January 2012 Microsoft Research India: 2012 in Review, 21 December 2012 "Technologist turns philosopher", The Hindu, 1 October 2011 "Boss' Day Out: Dr P Anandan", NDTV, 10 September 2009 Year of birth missing (living people) Living people Indian computer scientists Businesspeople from Bangalore Scientists from Karnataka Computer scientists
13473328
https://en.wikipedia.org/wiki/SharePoint
SharePoint
SharePoint is a web-based collaborative platform that integrates with Microsoft Office. Launched in 2001, SharePoint is primarily sold as a document management and storage system, but the product is highly configurable and its usage varies substantially among organizations. According to Microsoft, SharePoint had 190 million users across 200,000 customer organizations. Editions There are various editions of SharePoint, which have different functions. SharePoint Standard Microsoft SharePoint Standard builds on the Microsoft SharePoint Foundation in a few key product areas: Sites: Audience targeting, governance tools, Secure store service, web analytics functionality. Communities: 'MySites' (personal profiles including skills management, and search tools), enterprise wikis, organization hierarchy browser, tags and notes. Content: Improved tooling and compliance for document & record management, managed metadata, word automation services, content type management. Search: Better search results, search customization abilities, mobile search, 'Did you mean?', OS search integration, Faceted Search, and metadata/relevancy/date/location-based refinement options. Composites: Pre-built workflow templates, Business Connectivity Services (BCS) profile pages. SharePoint Standard licensing includes a CAL (client access license) component and a server fee. SharePoint Standard may also be licensed through a cloud model. SharePoint Server SharePoint Server is provided to organizations that seek greater control over SharePoint's behavior or design. This product is installed on customers' IT infrastructure. It receives less frequent updates but has access to a wider set of features and customization capabilities. There are two editions of SharePoint Server: Standard and Enterprise. A free version called 'Foundation' was discontinued in 2016. These servers may be provisioned as normal virtual/cloud servers or as hosted services. 
SharePoint Enterprise Built upon SharePoint Standard, Microsoft SharePoint Enterprise features can be unlocked by providing an additional license key. Extra features in SharePoint Enterprise include: Search thumbnails and previews, rich web indexing, better search results. Business intelligence integration, dashboards, and business data surfacing. PowerPivot and PerformancePoint. Microsoft Office Access, Visio, Excel, and InfoPath Forms services. SharePoint Enterprise Search extensions. SharePoint Enterprise licensing includes a CAL component and a server fee that must be purchased in addition to SharePoint Server licensing. SharePoint Enterprise may also be licensed through a cloud model. SharePoint Online Microsoft's hosted SharePoint is typically bundled in Microsoft 365 subscriptions, but can be licensed separately. SharePoint Online has the advantage of not needing to maintain one's own servers, but as a result lacks the customization options of a self-hosted installation of SharePoint. It is limited to a core set of collaboration, file hosting, and document and content management scenarios, and is updated on a frequent basis, but is typically comparable with SharePoint Enterprise. Currently, additional capabilities include: Support for SharePoint Framework extensions New "Modern" (Responsive) SharePoint UX (partially included in 2016 - Feature Pack 1) Yammer Integration & Office 365 Groups Integration with Outlook Web App Newer versions of Online Office Document Editor Tools Removal of various file size/number limitations Apps Concept Missing capabilities include: Some search & UI customizations Many web publishing capabilities Service Application administration options Many customization/solution types will not run No ability to read error (ULS) logs No ability to share a Site Page (ASPX) to external anonymous visitors; only documents (Word, Excel, Picture, ...) may be shared as such Microsoft lists changes in SharePoint Online on its Office Roadmap. 
SharePoint Online is available in all Microsoft 365 plans. Applications SharePoint usage varies from organization to organization. The product encompasses a wide variety of capabilities, most of which require configuration and governance. The most common uses of SharePoint include: Enterprise content and document management SharePoint allows for storage, retrieval, searching, archiving, tracking, management, and reporting on electronic documents and records. Many of the functions in this product are designed around various legal, information management, and process requirements in organizations. SharePoint also provides search and 'graph' functionality. SharePoint's integration with Microsoft Windows and Microsoft Office allows for collaborative real-time editing, and encrypted/information rights managed synchronization. This capability is often used to replace an existing corporate file server, and is typically coupled with an enterprise content management policy. Intranet and social network A SharePoint intranet or intranet portal is a way to centralize access to enterprise information and applications. It is a tool that helps an organization manage its internal communications, applications and information more easily. Microsoft claims that this has organizational benefits such as increased employee engagement, centralizing process management, reducing new staff onboarding costs, and providing the means to capture and share tacit knowledge (e.g. via tools such as wikis). Collaborative software SharePoint contains team collaboration groupware capabilities, including: project scheduling (integrated with Outlook and Project), social collaboration, shared mailboxes, and project-related document storage and collaboration. Groupware in SharePoint is based around the concept of a "Team Site". 
File hosting service (personal cloud) SharePoint Server hosts OneDrive for Business, which allows storage and synchronization of an individual's personal documents, as well as public/private file sharing of those documents. This is typically combined with other Microsoft Office Servers/Services, such as Microsoft Exchange, to produce a "personal cloud". WebDAV can be used to access files without using the web interface. However, Microsoft's implementation of WebDAV does not conform to the official WebDAV protocol and therefore is not compliant with the WebDAV standard. For example, WebDAV applications have to support the language-tagging functionality of the XML specification, which Microsoft's implementation does not. Only Windows XP to Windows 8 are supported. Custom web applications SharePoint's custom development capabilities provide an additional layer of services that allow rapid prototyping of integrated (typically line-of-business) web applications. SharePoint provides developers with integration into corporate directories and data sources through standards such as REST/OData/OAuth. Enterprise application developers use SharePoint's security and information management capabilities across a variety of development platforms and scenarios. SharePoint also contains an enterprise "app store" that has different types of external applications, which are encapsulated and given managed access to resources such as corporate user data and document data. Content structure Pages SharePoint provides free-form pages which may be edited in-browser. These may be used to provide content to users, or to provide structure to the SharePoint environment. Web parts and app parts Web parts and app parts are components (also known as portlets) that can be inserted into Pages. They are used to display information from both SharePoint and third-party applications. Content item, Content Type, Libraries, Lists, and "Apps" A content item is a resource in electronic form. 
The following are some examples: Document: always has a "Name". Contact: may have an email address and/or phone number. Sales Invoice: may have a Customer ID. Content Types are definitions (or types) of Content items. These definitions describe things like what metadata fields a Document, Contact, or Sales Invoice may have. SharePoint allows you to create your own definitions based on the built-in ones. Some built-in content types include: Contacts, Appointments, Documents, and Folders. A SharePoint Library stores and displays Content items of type Documents and Folders. A SharePoint List stores and displays data items such as Contacts. Some built-in content types such as 'Contact' or 'Appointment' allow the list to expose advanced features such as Microsoft Outlook or Project synchronization. In SharePoint 2013, in some locations, Lists and Libraries were renamed 'Apps' (despite being unrelated to the "SharePoint App Store"). In SharePoint 2016, some of these were renamed back to Lists and Libraries. Sites A SharePoint Site is a collection of pages, lists, libraries, apps, configurations, features, content types, and sub-sites. Examples of Site templates in SharePoint include: collaboration (team) sites, wiki sites, blank sites, and publishing sites. Configuration and customization Web-based configuration SharePoint is primarily configured through a web browser. The web-based user interface provides most of the configuration capability of the product. Depending on your permission level, the web interface can be used to: Manipulate content structure, site structure, create/delete sites, modify navigation and security, or add/remove apps. Enable or disable product features, upload custom designs/themes, or turn on integrations with other Office products. Configure basic workflows, view usage analytics, manage metadata, configure search options, upload customizations, and set up integration. 
SharePoint Designer SharePoint Designer is a semi-deprecated product that provided 'advanced editing' capabilities for HTML/ASPX pages, but remains the primary method of editing SharePoint workflows. A significant subset of HTML editing features was removed in Designer 2013, and the product was expected to be deprecated in 2016–17. Microsoft SharePoint Server features are configured either using PowerShell or a web UI called "Central Administration". Configuration of server farm settings (e.g. search crawl, web application services) can be handled through these central tools. While Central Administration is limited to farm-wide settings (config DB), it provides access to tools such as the 'SharePoint Health Analyzer', a diagnostic health-checking tool. In addition to PowerShell's farm configuration features, some limited tools are made available for administering or adjusting settings for sites or site collections in content databases. A limited subset of these features is made available by SharePoint's SaaS providers, including Microsoft. Custom development The SharePoint Framework (SPFX) provides a development model based on the TypeScript language. The technical stack comprises Yeoman, Node.js, webpack, gulp and npm, embracing modern web-development methods. It is the only supported way to customize the new modern experience user interface (UI). It has been globally available since mid-2017. It allows a web developer to step into SharePoint development more easily. The SharePoint "App Model" provides various types of external applications that offer the capability to show authenticated web-based applications through a variety of UI mechanisms. Apps may be either "SharePoint-hosted" or "Provider-hosted". Provider-hosted apps may be developed using most back-end web technologies (e.g. ASP.NET, NodeJS, PHP). Apps are served through a proxy in SharePoint, which requires some DNS/certificate manipulation in on-premises versions of SharePoint. 
The SharePoint "Client Object Model" (available for JavaScript and .NET) and REST/SOAP APIs can be referenced from many environments, providing authenticated users access to a wide variety of SharePoint capabilities. "Sandboxed" plugins can be uploaded by any end-user who has been granted permission. These are security-restricted, and can be governed at multiple levels (including resource consumption management). In multi-tenant cloud environments, these are the only customizations that are typically allowed. Farm features are typically fully trusted code that needs to be installed at farm level. These are considered deprecated for new development. Service applications: It is possible to integrate directly into the SharePoint SOA bus, at a farm level. Customization may appear through: Application-to-application integration with SharePoint. Extensions to SharePoint functionality (e.g. custom workflow actions). 'Web Parts' (also known as "portlets", "widgets", or "gadgets") that provide new functionality when added to a page. Pages/sites or page/site templates. Server architecture SharePoint Server can be scaled down to operate entirely from one developer machine, or scaled up to be managed across hundreds of machines. Farms A SharePoint farm is a logical grouping of SharePoint servers that share common resources. A farm typically operates stand-alone, but can also subscribe to functions from another farm, or provide functions to another farm. Each farm has its own central configuration database, which is managed through either a PowerShell interface or a Central Administration website (which relies partly on PowerShell's infrastructure). Each server in the farm is able to directly interface with the central configuration database. Servers use this to configure services (e.g. IIS, Windows features, database connections) to match the requirements of the farm, and to report server health issues, resource allocation issues, etc. 
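The REST API mentioned above can be exercised from almost any environment. The sketch below shows only URL construction and response parsing for the standard `/_api/web/lists` endpoint; the site URL is a placeholder, and the authentication step (NTLM, claims, or OAuth depending on the deployment) is deliberately left as a comment rather than implemented.

```python
# Minimal sketch of talking to the SharePoint REST API. The helper functions
# are pure so they can run without a server; the actual HTTP call is shown
# only as a commented placeholder.

def lists_endpoint(site_url):
    """REST endpoint that enumerates the lists of a SharePoint site."""
    return site_url.rstrip("/") + "/_api/web/lists"

def list_titles(odata_payload):
    """Extract list titles from a verbose-OData JSON payload."""
    return [entry["Title"] for entry in odata_payload["d"]["results"]]

# A real client would do something like (placeholder site URL and auth):
#   import requests
#   resp = requests.get(lists_endpoint("https://contoso/sites/team"),
#                       headers={"Accept": "application/json;odata=verbose"},
#                       auth=some_auth)
#   titles = list_titles(resp.json())

# Sample payload in the verbose-OData shape ({"d": {"results": [...]}}):
sample = {"d": {"results": [{"Title": "Documents"}, {"Title": "Tasks"}]}}
print(lists_endpoint("https://contoso/sites/team/"))
print(list_titles(sample))  # ['Documents', 'Tasks']
```

The same endpoint pattern (`/_api/...`) covers most site resources; requesting `application/json;odata=nometadata` instead returns a flatter payload without the `d`/`results` wrapper.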
Web applications Web applications (WAs) are top-level containers for content in a SharePoint farm. A web application is associated primarily with IIS configuration. A web application consists of a set of access mappings or URLs defined in the SharePoint central management console, which are replicated by SharePoint across every IIS instance (e.g. web application servers) configured in the farm. Site collections A site collection is a hierarchical group of 'SharePoint Sites'. Each web application must have at least one site collection. Site collections share common properties and common subscriptions to service applications, and can be configured with unique host names. A site collection may have a distinct content database, or may share a content database with other site collections in the same web application. Service applications Service applications provide granular pieces of SharePoint functionality to other web and service applications in the farm. Examples of service applications include the User Profile Sync service and the Search Indexing service. A service application can be turned off, exist on one server, or be load-balanced across many servers in a farm. Service Applications are designed to have independent functionality and independent security scopes. Administration, security, compliance SharePoint's architecture enables a 'least-privileges' execution permission model. SharePoint Central Administration (the CA) is a web application that typically exists on a single server in the farm; however, it is also able to be deployed for redundancy to multiple servers. This application provides a complete centralized management interface for web & service applications in the SharePoint farm, including AD account management for web & service applications. In the event of the failure of the CA, Windows PowerShell is typically used on the CA server to reconfigure the farm. 
The structure of the SharePoint platform enables multiple WAs to exist on a single farm. In a shared (cloud) hosting environment, owners of these WAs may require their own management console. The SharePoint 'Tenant Administration' (TA) is an optional web application used by web application owners to manage how their web application interacts with the shared resources in the farm. Compliance, standards and integration SharePoint integrates with Microsoft Office, using Microsoft's OpenXML document standard; document metadata is also stored using this format. SharePoint provides various application programming interfaces (APIs: client-side, server-side, JavaScript) and REST, SOAP and OData-based interfaces. SharePoint can be used to achieve compliance with many document retention, record management, document ID and discovery laws. SharePoint is compatible with CMIS - the Content Management Interoperability Standard - using Microsoft's CMIS Connector. SharePoint by default produces valid XHTML 1.0 that is compliant with WCAG 2.0 accessibility standards. SharePoint can use claims-based authentication, relying on SAML tokens for security assertions. SharePoint provides an open authentication plugin model. SharePoint supports XLIFF for the localization of content. Support for AppFabric was also added. Other SharePoint-related Microsoft products History Origins SharePoint evolved from projects codenamed "Office Server" and "Tahoe" during the Office XP development cycle. "Office Server" evolved out of the FrontPage and Office Server Extensions and "Team Pages". It targeted simple, bottom-up collaboration. "Tahoe", built on shared technology with Exchange and the "Digital Dashboard", targeted top-down portals, search and document management. The searching and indexing capabilities of SharePoint came from the "Tahoe" feature set. 
The search and indexing features were a combination of the index and crawling features from the Microsoft Site Server family of products and the query language of Microsoft Index Server. The GAC (Global Assembly Cache) is used to accommodate shared assemblies that are specifically designated to be shared by applications executed on a system. Versions Successive versions (in chronological order): Office Server Extensions SharePoint Portal Server 2001 SharePoint Team Services Windows SharePoint Services 2.0 (free license) and SharePoint Portal Server 2003 (commercial release) Windows SharePoint Services 3.0 (free license) and Office SharePoint Server 2007 (commercial extension) SharePoint Foundation 2010 (free), SharePoint Server 2010 (commercial extension for Foundation), and SharePoint Enterprise 2010 (commercial extension for Server) SharePoint Foundation 2013 (free), SharePoint Server 2013 (extension on top of Foundation), and SharePoint Enterprise 2013. SharePoint Online (Plan 1 & 2). SharePoint Server 2016 and SharePoint Enterprise 2016. SharePoint Server 2019 and SharePoint Enterprise 2019. Notable changes in SharePoint 2010 Changes in end-user functionality added in the 2010 version of SharePoint include: New UI with Fluent Ribbon, using wiki-pages rather than 'web-part pages' and offering multi-browser support. New social profiles, and early social networking features Central Administration rebuilt. Restructure of "Shared Service Providers" - Introduction of "Service Applications" SOA model. Sandboxed Solutions and a client-side object-model APIs for JavaScript, Silverlight, and .NET applications Business Connectivity Services, Claims-based Authentication, and Windows PowerShell support Notable changes in SharePoint 2013 Cross-browser drag & drop support for file uploads/changes, and Follow/Share buttons OneDrive for Business (initially SkyDrive Pro) replaces MySites and Workspaces. Updates to social network feature & new task aggregation tool. 
Database caching, called Distributed Cache Service Content-aware switching, called Management Audit center (service called eDiscovery) Rebuilt and improved search capabilities Removal of some analytics capabilities UI: JSLink, MDS, theme packs. No WYSIWYG in SP Designer. Notable changes in SharePoint 2016 Sources: Hybrid Improvements Single Sites View Unified Search Search Sensitive Information in Hybrid Search Unified UI (O365) Performance, Scaling & Deployment Improvements Search Scaling Capabilities Site Collection Enhancement Deterministic View Threshold – Removing 5000 Limit Durable Links and Large Files Support Deployment Improvements MinRole Zero Downtime Patching Notable changes in SharePoint 2019 Sources: Modern sites and page layouts Communication sites Large File Support, Character Restrictions, and File/Folder Names See also Enterprise portal List of collaborative software List of content management systems References External links SharePoint Roadmap 2001 software Content management systems Document management systems Information management Portal software Proprietary database management systems Records management technology Microsoft Office servers
21559539
https://en.wikipedia.org/wiki/Computer-aided%20ergonomics
Computer-aided ergonomics
Computer-aided ergonomics is an engineering discipline using computers to solve complex ergonomic problems involving interaction between the human body and its environment. Because the human body is highly complex, it can be beneficial to use computers to solve problems involving the body and the environment that surrounds it. Overview Computer-aided ergonomics is an interdisciplinary field of work that involves the use of a computer to solve complex problems regarding a human body interacting with an environment. As the name suggests, a computer helps find the best ergonomic solution in a given situation. Ergonomics traditionally involves many disciplines, including biomechanics, anthropometry, mechanical engineering, industrial engineering, kinesiology, health sciences and physiology. Due to the highly interdisciplinary and complex nature of ergonomics, it is hard to get a full understanding of a situation. Because the human body is a complex system, it is beneficial to have a computer system that models the human body as a mechanical system. The human body contains several parts that can be modeled as known mechanical systems, for example bones connected with joints and driven by actuators (muscles). Example of a system One example of a computer system that can be used as a computer-aided ergonomics system is the AnyBody Modeling System, which considers the human body as a dynamic multi-rigid-body system. The human model is a public-domain model that contains most of the bones, muscles and joints present in the human body. The model has more than 1000 muscle elements, and many muscle elements have been modeled with the detailed muscle model theory described by A.V. Hill in 1938. The muscle model contains information including physiological cross-sectional area, length, and the ratio of red and white fibers. The AnyBody Modeling System is capable of modeling almost any human voluntary movement or static situation. 
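The Hill (1938) muscle theory referenced above centers on the force-velocity relation (F + a)(v + b) = (F0 + a)b for a shortening muscle, where F0 is the isometric force and a, b are Hill's constants. The sketch below evaluates that hyperbola; the parameter values are invented for illustration and are not taken from the AnyBody model.

```python
# Hill's classic force-velocity hyperbola: (F + a)(v + b) = (F0 + a) * b.
# Solving for F gives the tension a shortening muscle can produce at
# velocity v. Parameter values below are illustrative only.

f0 = 1000.0   # isometric (zero-velocity) force, N
a = 250.0     # Hill constant with units of force, N
b = 0.3       # Hill constant with units of velocity, m/s

def hill_force(v):
    """Tension F (N) at shortening velocity v (m/s)."""
    return (f0 + a) * b / (v + b) - a

print(hill_force(0.0))               # isometric force, ~f0
v_max = b * f0 / a                   # velocity at which force drops to zero
print(abs(hill_force(v_max)) < 1e-9) # True: no force at maximum velocity
```

This captures the behavior a muscle-driven multibody model needs: force falls off hyperbolically as shortening velocity increases, from F0 at rest to zero at v_max.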
One example of a model could be a seated model, where the human body is placed in a chair that has a seat, backrest, headrest, leg rest, footrest, and armrest. The model can then calculate the forces acting between the human body and the chair, as well as, for example, the forces between any given pair of spinal vertebrae. This could be used for finding the optimal seated posture for a person who suffers from lower back pain, assuming that greater load on a vertebra results in greater pain. Benefits of computer-aided ergonomics The question is in what scenarios it would be beneficial to use computer-aided ergonomics compared to traditional ergonomics. First of all, computer-aided ergonomics using, for example, a musculoskeletal modeling system such as the AnyBody Modeling System would be beneficial in physical ergonomics, which traditionally combines aspects of human anatomical, anthropometric, physiological and biomechanical characteristics related to some physical activity. The model can provide a quantitative foundation for ergonomic design and recommendations. Traditionally, ergonomics has been based on recommendations derived from empirical data from various working tasks: if many people get injured from working in a certain posture, it is recommended to avoid working in that posture. However, when applying the recommendations to another related working posture, the posture or the movement often does not match exactly. This means that the theory and recommendations do not apply to the new situation. In this case it could be beneficial to model the situation in order to find out how the reaction forces and muscle activities differ from the first situation, where the recommendations were based on empirical data. A combination of risk factors can be derived from the model output. For example, when designing an office chair, one would like to design it to fulfill several demands: comfortable, relaxing, supporting and so on. 
Some of the criteria related to the demands might be conflicting. For example, comfort is often related to the shear force on the seat, which should be kept as low as possible. The seat shear force could be removed by making the seat horizontal and raising the backrest to 90 degrees; however, this would not be relaxing. Therefore, a combination of seat and backrest angles needs to be considered in order to find optimal seated postures related to only these two design variables. Ergonomics Design for X
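The trade-off between the two design variables discussed above can be sketched as a grid search that minimizes a weighted sum of two conflicting cost terms. The cost model and weights here are entirely invented for illustration; in a real study both terms would come from a musculoskeletal model's output (e.g. seat shear force and spinal loads) rather than these toy formulas.

```python
# Toy multi-criteria search over seat angle and backrest angle (degrees).
# "shear" stands in for seat shear force (lowest for a flat seat), and
# "relax" penalizes distance from an assumed relaxing 115-degree recline.
# Both terms and the weights are made up for the example.

def combined_cost(seat_deg, backrest_deg, w_shear=1.0, w_relax=1.0):
    shear = abs(seat_deg)            # flat seat -> least shear (toy model)
    relax = abs(115 - backrest_deg)  # reclined backrest -> most relaxing
    return w_shear * shear + w_relax * relax

def best_posture():
    """Exhaustively search a coarse grid of candidate postures."""
    candidates = [(s, b) for s in range(0, 21) for b in range(90, 131)]
    return min(candidates, key=lambda p: combined_cost(*p))

print(best_posture())   # -> (0, 115) under this toy cost model
```

Changing the weights shifts the optimum, which is exactly the conflicting-criteria behavior the text describes: no single posture minimizes every demand at once.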
51693171
https://en.wikipedia.org/wiki/Abacus%20Data%20Systems
Abacus Data Systems
Abacus Data Systems, doing business as AbacusNext, is an American software and private cloud services provider headquartered in San Diego, California. Services and Software AbacusNext offers the following: Cloud Abacus Private Cloud is a private cloud sold as Desktop-as-a-Service within a Windows Server 2012 R2 environment. This service relies on Veeam Software for backup support, and is built on Intel-based architecture. This private cloud service is colocated across three carrier-neutral data centers that are interconnected via a self-run optical network. These data centers are Scalematrix in San Diego, California; Skybox Houston One in Houston, Texas; and SUPERNAP by Switch in Las Vegas, Nevada. AbacusNext has a 1-star rating on the Better Business Bureau website. Software AbacusNext offers four Windows-based platforms, all of which rely on relational database management systems. Two are law practice management software for lawyers: AbacusLaw uses Advantage Database, while Amicus Attorney uses Microsoft SQL Server. AbacusNext has two platforms for accounting firms that rely on Microsoft SQL Server: ResultsCRM, a customer relationship management system for QuickBooks, and OfficeTools practice management software. Management Scott Johnson, CEO (former CEO of Zephyr, which was acquired by SmartBear, and former CEO of Social Solutions) Mike Skelly, CFO (former CFO of Trio Health, UpWind Solutions, and Active Network) In December 2015, American private equity investment firm Providence Equity Partners made a strategic majority investment in AbacusNext (Abacus Data Systems at that time). Acquisitions May 2016: acquired Toronto-based software company Gavel & Gown, makers of Amicus Attorney. February 2017: acquired Virginia-based software company Results Software, makers of ResultsCRM. February 2017: acquired San Diego-based cloud hosting provider Cloudnine Realtime. May 2017: acquired Palmdale-based software company OfficeTools. 
November 2017: acquired Scotland-based software company HotDocs. HotDocs was previously owned by LexisNexis before being sold in 2009. December 2017: acquired New Hampshire-based software company Commercial Logic. References External links Official website Software companies of the United States Cloud applications Cloud computing providers Companies established in 1983 Software companies based in California Technology companies based in San Diego
24640261
https://en.wikipedia.org/wiki/Steve%20Whittaker
Steve Whittaker
Steve Whittaker is a Professor in Human-Computer Interaction at the University of California Santa Cruz. He is best known for his research at the intersection of computer science and social science, in particular on computer-mediated communication and personal information management. He is a Fellow of the ACM and winner of the CSCW 2018 "Lasting Impact" award. He also received a Lifetime Research Achievement Award from SIGCHI and is a Member of the SIGCHI Academy. He is Editor of the journal Human Computer Interaction. Life He was born in Liverpool in the UK, in 1957. As an undergraduate he studied Natural Sciences at Cambridge, obtaining his PhD in Cognitive Psychology at St. Andrews. He spent many years in industry, working at Hewlett-Packard Labs, AT&T Labs, and IBM Research Labs. Moving to academia, he was Professor of Information Science at the University of Sheffield before relocating to the University of California in 2009. Research He publishes in the fields of Human-Computer Interaction and Computer Supported Cooperative Work. He applies social science theory to understand people's interactions with technologies, using these insights to design new human-centric technologies. His early research focused on computer-mediated communication, extending psychological theories of conversation to develop new accounts of online interaction. That work led to the design of novel collaboration, messaging and social computing technologies, some of which have since become standard. He has also researched personal information management. He co-authored a book with Ofer Bergman, The Science of Managing Our Digital Stuff, which uses cognitive psychology to understand how we organize and access our personal digital information. He was also one of the first to document how email contributes to information overload, proposing technical approaches to address this. 
More recently his work examined 'digital memory', critiquing Lifelogging approaches and developing new techniques for understanding and reflecting on our pasts. Awards ACM Fellow (2015). ACM SIGCHI Lifetime Research Award (2014). ACM SIGCHI Academy (2008). ACM CSCW Lasting Impact Award (2018). Editor, Human Computer Interaction (2013–present) Selected bibliography Whittaker, S. and Sidner, C. (1996). Email overload: exploring personal information management of email. In Proceedings of CHI'96 Conference on Computer Human Interaction, 276–283, NY: ACM Press. https://dl.acm.org/doi/10.1145/238386.238530 Whittaker, S., and O'Conaill, B. (1997). The role of vision in face-to-face and mediated communication. In In K. Finn, A. Sellen, S. Wilbur (Eds.), Video mediated communication. LEA: NJ. https://psycnet.apa.org/record/1997-08440-001 Whittaker, S. Terveen, L., Hill, W., and Cherny, L. (1998). The dynamics of mass interaction, In Proceedings of Conference on Computer Supported Cooperative Work, 257–264. NY: ACM Press. https://dl.acm.org/doi/10.1145/289444.289500 Nardi, B., Whittaker, S., Bradner, E. (2000). Interaction and Outeraction: Instant Messaging in Action. In Proceedings of Conference on Computer Supported Cooperative Work, 79–88. New York: ACM Press. https://dl.acm.org/doi/10.1145/358916.358975 Whittaker, S., Terveen, L., and Nardi, B. (2000). Let's stop pushing the envelope and start addressing it: a reference task agenda for HCI. Human Computer Interaction, 15, 75-106. https://dl.acm.org/doi/10.1207/S15327051HCI1523_2 Whittaker, S. (2002). Theories and Methods in Mediated Communication. In Graesser, A., Gernsbacher, M., and Goldman, S. (Ed.) The Handbook of Discourse Processes, 243–286, Erlbaum, NJ. https://psycnet.apa.org/record/2003-02476-006 Sellen, A., and Whittaker, S. (2010). Lifelogging: What Are We Doing and Why Are We Doing It? Communications of the ACM, Vol. 53, No. 5, 70–77. 
https://cacm.acm.org/magazines/2010/5/87249-beyond-total-capture/fulltext Whittaker. S. (2011). Personal Information Management: From Consumption to Curation In B. Cronin (Ed.) Annual Review of Information Science and Technology, 45, 1-42, Wiley, Medford, NJ. DOI: 10.1002/aris.2011.1440450108. https://doi.org/10.1002/aris.2011.1440450108 Whittaker, S., Matthews, T., Cerruti, J., Badenes, H., and Tang, J. (2011). Am I wasting my time organizing email?: a study of email refinding. In Proceedings of the 2011 Conference on Human factors in computing systems (CHI '11). ACM, New York, NY, USA, 3449–3458. https://dl.acm.org/doi/10.1145/1978942.1979457 Bergman, O. and Whittaker, S (2016). The Science of Managing Our Digital Stuff, Cambridge, MIT Press. https://mitpress.mit.edu/books/science-managing-our-digital-stuff References External links Human Computer Interaction Journal Whittaker's personal homepage Human–computer interaction researchers Alumni of the University of Cambridge Alumni of the University of St Andrews Academics of the University of Sheffield Living people Year of birth missing (living people)
803458
https://en.wikipedia.org/wiki/Filename%20mangling
Filename mangling
In computing, filename mangling is the translation of a file name for compatibility at the operating system level. It occurs when a filename on a filesystem appears in a form incompatible with the operating system accessing it. Such mangling occurs, for example, on computer networks when a Windows machine attempts to access a file on a Unix server and that file has a filename which includes characters not valid in Windows. FAT Derivative Filesystem Legacy support under VFAT A common example of name mangling occurs on VFAT file systems on versions of Windows from Windows 95 onwards. The VFAT specification allows Long File Names (LFNs). For backwards compatibility with MS-DOS and older Windows software, which recognize filenames of at most 11 characters in 8.3 format (i.e., an eight-letter filename, a dot and a three-letter extension, such as autoexec.bat), files with LFNs are stored on disk in 8.3 format (longfilename.txt becoming longfi~1.txt), with the long file name stored elsewhere on the disk. Normally, when using compatible Windows programs which use standard Windows methods of reading the disk, the I/O subsystem returns the long filename to the program. However, if an old DOS application or an old Windows application tries to address the file, it will use the older, 8.3-only APIs, or work at a lower level and perform its own disk access, which results in the return of an 8.3 filename. In this case, the filenames become mangled by taking the first six non-space characters of the filename and appending a tilde (~) and a number to ensure the uniqueness of the 8.3 filename on the disk. This mangling scheme can turn (for example) Program Files into PROGRA~1. The technique remains relevant today when people use DOSBox to play classic DOS games, or run Windows 3.1 under DOSBox to play Win16 games on 64-bit Windows. 
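The 8.3 mangling scheme described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the actual VFAT algorithm: real implementations also strip characters invalid in 8.3 names, use hash-based tails for heavily colliding names, and consult the on-disk directory for uniqueness.

```python
def mangle_83(long_name, existing):
    """Simplified 8.3 short-name generation: first six non-space
    characters, then ~1, ~2, ... until the name is unique."""
    name, _, ext = long_name.rpartition('.')
    if not name:                      # no dot: the whole string is the name
        name, ext = long_name, ''
    # keep the first six non-space characters, uppercased (8.3 is case-insensitive)
    base = ''.join(c for c in name if c != ' ')[:6].upper()
    ext = ''.join(c for c in ext if c != ' ')[:3].upper()
    n = 1
    while True:
        short = f"{base}~{n}" + (f".{ext}" if ext else "")
        if short not in existing:     # ensure uniqueness "on the disk"
            return short
        n += 1

taken = set()
for lfn in ["longfilename.txt", "longfilezzz.txt", "Program Files"]:
    short = mangle_83(lfn, taken)
    taken.add(short)
    print(lfn, "->", short)   # e.g. Program Files -> PROGRA~1
```

Colliding long names receive successive tilde numbers, which is why two similarly named documents in the same folder can appear as LONGFI~1.TXT and LONGFI~2.TXT to legacy software.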
Unix Filesystems Unix file names can contain colons or backslashes, whereas Windows interprets such characters in other ways. Accordingly, software could mangle the Unix file "Notes: 11\04\03" as "Notes_ 11-04-03" to enable Windows software to remotely access the file. Other software, such as Samba on Unix, uses its own mangling scheme to map long filenames to DOS-compatible filenames (Samba administrators can configure this behavior in the config file). Mac OS macOS's Finder displays instances of ":" in file and directory names as "/". This is because the classic Mac OS used the ":" character internally as a path separator. Listing these files or directories in a terminal emulator, however, displays the ":" rather than the "/" character. References Computer file systems Computer files
48245974
https://en.wikipedia.org/wiki/Akeneo
Akeneo
Akeneo is a technology company that develops product information management (PIM) and product data intelligence software to improve customer experience. The company was founded in 2013 by Frédéric de Gombert, Benoit Jacquemont, Nicolas Dupont and Yoav Kutner. Akeneo is headquartered in Nantes, France, and has offices in the United States, United Kingdom, Germany, Spain, Italy, Israel and Australia, with more than 200 employees. It has worked with customers including Shop.com, Fossil, Midland Scientific, Air Liquide and Auchan. History Before founding Akeneo, Frédéric de Gombert, Benoit Jacquemont and Nicolas Dupont had worked together at the open-source company Smile. They began developing a single product database to compete with Excel and brought Yoav Kutner on board in founding Akeneo. The name comes from the Greek word akene: like an achene, the dry fruit that contains the plant's seed and spreads it on the wind. The company started producing a system for managing product information for use across distribution channels. Licensed under the Open Software License (OSL) 3.0, the first public beta of Akeneo PIM was released in September 2013. In January 2016, the company's chief executive officer, Frédéric de Gombert, received the Young Entrepreneur in Digital prize from the French newspaper La Tribune. Akeneo was a co-founder and partner of the Open Source School created by Smile and the École privée des sciences informatiques. The company also opened offices in Düsseldorf, Germany and Boston, Massachusetts during early 2016. The Deloitte Technology Fast 500 named Akeneo on its EMEA list for 2017. Akeneo expanded into Israel after acquiring Sigmento, a product data automation startup. The acquisition worked to merge Sigmento's technology with Akeneo's products for data enrichment automation. The company also opened offices in Spain, Poland and the United Kingdom during early 2018. 
By late 2018, Akeneo had partnered with Magento to integrate its PIM software with the e-commerce system. In March 2019, the company expanded into Italy and later announced that it would be opening an office in Australia in 2020. Akeneo was recognized as a leader in product information management applications in an IDC MarketScape report in late 2019. Funding Before beginning official funding rounds, Kima Ventures and Nestadio Capital contributed €350,000 to the company in 2013. In September 2014, Akeneo received initial funding of $2.3 million led by Alven Capital. The company raised a Series B funding round of $13 million led by Partech Ventures in March 2017. Salesforce Ventures made a strategic investment in the company as it continued to expand in North America. It completed a $46 million Series C funding round in September 2019 led by Summit Partners. Products Akeneo's products include two versions of its open-source PIM software: a free community edition (CE) and an enterprise software-as-a-service (SaaS) edition (EE). More than 70,000 companies use the free community edition. Akeneo PIM open source software is built under the OSL 3.0 license and written in PHP. In addition, the company launched additional modules, including the Akeneo Onboarder for onboarding product information from suppliers, as well as content syndication and AI-based product content curation capabilities. In February 2020, the company released PIM 4.0, which included features such as an asset manager, data quality insights, product attribute mapping and connection management using API-based integration. Akeneo PIM is featured in the leader quadrant on the G2 PIM vendor grid. Versions Note that the names of CE releases are based on Bugs Bunny episodes, and EE releases on plants that bear achenes. References External links Official website Free content management systems Software companies of France Software using the Open Software License
19714462
https://en.wikipedia.org/wiki/IET%20Mountbatten%20Medal
IET Mountbatten Medal
The IET Mountbatten Medal is awarded annually for an outstanding contribution, or contributions over a period, to the promotion of electronics or information technology and their application. The Medal was established by the National Electronics Council in 1992 and named after Louis Mountbatten, The Earl Mountbatten of Burma, Admiral of the Fleet and Governor-General of India. Since 2011, the medal has been awarded as one of the IET Achievement Medals. Eligibility One of the IET's Prestige Achievement Medals, the Medal is awarded to an individual for an outstanding contribution, or contributions over a period, to the promotion of electronics or information technology and in the dissemination of the understanding of electronics and information technology to young people, or adults. Criteria In selecting a winner, the Panel give particular emphasis to: the stimulation of public awareness of the significance and value of electronics; spreading recognition of the economic significance of electronics and IT, and encouraging their effective use throughout industry in general; encouraging excellence in product innovation and the successful transition of scientific advances to wealth-creating products; recognising brilliance in academic and industrial research; encouraging young people of both sexes to make their careers in the electronics and IT industries; increasing the awareness of the importance of electronics and IT amongst teachers and others in the educational disciplines. Recipients See also List of computer science awards List of computer-related awards List of engineering awards List of awards named after people References External links Awards established in 1992 British science and technology awards Computer science awards Engineering awards Institution of Engineering and Technology
4534553
https://en.wikipedia.org/wiki/Host%20%28network%29
Host (network)
A network host is a computer or other device connected to a computer network. A host may work as a server offering information resources, services, and applications to users or other hosts on the network. Hosts are assigned at least one network address. A computer participating in networks that use the Internet protocol suite may also be called an IP host. Specifically, computers participating in the Internet are called Internet hosts. Internet hosts and other IP hosts have one or more IP addresses assigned to their network interfaces. The addresses are configured either manually by an administrator, automatically at startup by means of the Dynamic Host Configuration Protocol (DHCP), or by stateless address autoconfiguration methods. Network hosts that participate in applications that use the client–server model of computing are classified as server or client systems. Network hosts may also function as nodes in peer-to-peer applications, in which all nodes share and consume resources in an equipotent manner. Origins In operating systems, the term terminal host denotes a time-sharing computer or multi-user software providing services to computer terminals, or a computer that provides services to smaller or less capable devices, such as a mainframe computer serving teletype terminals or video terminals. Other examples of this architecture include a telnet host (a telnet server) and an xhost (an X Window client). The term Internet host or just host is used in a number of Request for Comments (RFC) documents that define the Internet and its predecessor, the ARPANET. RFC 871 defines a host as a general-purpose computer system connected to a communications network for "... the purpose of achieving resource sharing amongst the participating operating systems..." While the ARPANET was being developed, computers connected to the network were typically mainframe computer systems that could be accessed from dumb terminals connected via serial ports. 
Since these terminals did not host software or perform computations themselves, they were not considered hosts and were not assigned network addresses. User computers connected to the ARPANET at a packet-switching node were considered hosts. Nodes, hosts, and servers A network node is any device participating in a network. A host is a node that participates in user applications, either as a server, client, or both. A server is a type of host that offers resources to other hosts. Typically a server accepts connections from clients who request a service function. Every network host is a node, but not every network node is a host. Network infrastructure hardware such as modems, Ethernet hubs, and network switches does not directly or actively participate in application-level functions, does not necessarily have a network address, and such devices are not considered network hosts. See also Communication endpoint End system Port (computer networking) Terminal (telecommunication) References External links Networking hardware
29101775
https://en.wikipedia.org/wiki/Veramark%20Technologies
Veramark Technologies
Veramark Technologies, Inc. provided services and software for telecom expense management (TEM) and call accounting. The company "specialized in controlling telecom expenses by managing a company's voice, data, and wireless services through a combination of auditing, consulting and software". Veramark, originally known as MOSCOM Corporation, was founded in 1983. In 2001, the company sold all the rights for the VeraBill product line, a mediation, provisioning and billing solution for wireline and wireless mid-size carriers, to Mind CTI Ltd. for US$1 million. The company acquired the enterprise TEM and consulting businesses of Source Loop LLC, a Georgia-based telecom service provider, in June 2010. In September 2010, the company moved into 23,000 square feet of new office space at Eagle's Landing Business Park in Henrietta, New York. In a September 2010 State of the Industry Report published by AOTMP, Veramark was ranked among the top 25 suppliers of TEM and wireless mobility management solutions. In December 2013, Veramark Technologies merged with PINNACLE and Movero to form Calero. Products and services Veramark TEM services included licensed software deployments, hosted software-as-a-service (SaaS) agreements, and fully managed services. The VeraSMART Telecom Expense Management software suite included capabilities for managing contracts and sourcing, ordering and provisioning, invoices and disputes, inventory, usage, and process automation. VeraSMART eCAS Call Accounting Software has been rated compliant with Release 6 of the Avaya Communication Server IP PBX. References Companies based in Monroe County, New York Companies established in 1983 Software companies based in New York (state) Telecommunications companies of the United States Software companies of the United States
294400
https://en.wikipedia.org/wiki/MacWrite
MacWrite
MacWrite is a WYSIWYG word processor application released along with the first Apple Macintosh systems in 1984. Together with MacPaint, it was one of the two original "killer applications" that propelled the adoption and popularity of the GUI in general, and the Mac in particular. MacWrite was spun off to Claris, which released a major update in 1989 as MacWrite II. A further series of improvements produced 1993's MacWrite Pro, but further improvements were few and far between. By the mid-1990s, MacWrite was no longer a serious contender in the word processing market, development ended around 1995, and it was completely discontinued in 1998 due to dwindling sales. History Development When the Mac was first being created, it was clear that users would interact with it differently from other personal computers. Typical computers of the era booted into text-only DOS or BASIC command line environments, requiring the users to type in commands to run programs. Some of these programs may have presented a graphical user interface of their own, but on the Mac, users would instead be expected to stay in the standard GUI both for launching and running programs. Having an approachable, consistent GUI was an advantage for the Mac platform, but unlike prior personal computers, the Mac was sold with no programming language built-in. This presented a problem to Apple: the Mac was due to be launched in 1983 (originally), with a new user interface paradigm, but no third-party software would be available for it, nor could users easily write their own. Users would end up with a computer that did nothing. In order to fill this void, several members of the Mac team took it upon themselves to write simple applications to fill these roles until third-party developers published more full-fledged software. The result was MacWrite and MacPaint, which shipped free with every Macintosh from 1984 to 1986. 
The MacWrite development team was a company called Encore Systems, founded and led by Randy Wigginton, one of Apple's earliest employees, and included Don Breuner and Ed Ruder, co-founders of Encore Systems and also early Apple employees; Gabreal Franklin later joined Encore Systems as President. Wigginton, who had left Apple in 1981, maintained a relationship with many Apple employees, many of whom were on the Macintosh development team. He agreed to lead the MacWrite development team on a semi-official basis. Before it was released, MacWrite was known as "Macintosh WP" (Word Processor) and "MacAuthor". Allegedly, Steve Jobs was not convinced of his team's abilities, and secretly commissioned another project just to be sure; the resulting program was eventually released as WriteNow. Early versions The first versions of MacWrite were rather limited, supporting only the most basic editing features and able to handle just a few pages of text before running into performance problems. (Early versions of MacWrite held the entire document in memory, and early versions of the Macintosh had relatively little free memory.) Nevertheless, it raised user expectations of a word processing program. MacWrite established the conventions for a GUI-based word processor, with such features as a toolbar for selecting paragraph formatting options, font and style menus, and a ruler for tabs, margins, and indents. Similar word processors followed, including the first GUI version of Microsoft Word and WriteNow, which addressed many of MacWrite's limitations while adhering to much the same user interface. The original Mac could print to a dot matrix printer called the ImageWriter, but quality was only adequate. The later LaserWriter laser printer allowed dramatically better output, at a price. 
However, the possibilities of the GUI/MacWrite/LaserWriter combination were obvious and this, in turn, spurred the development of desktop publishing, which became the "killer app" for the Mac and GUIs in general. MacWrite's inclusion with the Macintosh discouraged developers from creating other word processing software for the computer. Apple unbundled the software with the introduction of the Macintosh Plus, requiring customers to purchase it for the first time. Strong sales continued, and Apple eventually let MacWrite and MacPaint languish with no development resources assigned to improving them. Unfortunately this plan backfired. Users flooded Apple with complaints, demanding newer versions that would keep pace with new features in the Mac, while at the same time developers flooded Apple with complaints about the possibility of any upgrade, which would compete with their own products. Apple finally decided the only solution was to spin off the products as a separate company, Claris. MacWrite II Claris formed in 1987 and re-released the existing versions of the Apple products under its own name. Initially it seemed Claris was as uninterested in developing MacWrite as Apple had been. Several minor upgrades were released to allow MacWrite to run on newer versions of the classic Mac OS, but few other problems were addressed. Things changed in the late 1980s with the introduction of MacWrite II. The main changes for this release were an updated user interface, a number of new "style" capabilities, and the inclusion of Claris' file translator technology, XTND. MacWrite II was the first really new version of the software, and was based on a word processing engine purchased from Quark, Inc. By 1989 Word already dominated the Mac with about 60% market share, but the introduction of MacWrite II changed things dramatically; by 1990 Word had dropped to about 45% of the market, and MacWrite had risen to about 30%. 
This seemed to demonstrate that it would be worth developing further, but Claris did not respond quickly with updated versions. Microsoft, on the other hand, did, and soon introduced Word 4.0. MacWrite's share once again started to erode. MacWrite Pro In the late 1980s, Claris started a massive upgrade series to produce the "Pro" line of products. The main change would be to integrate all of their products with a consistent GUI based on that of FileMaker. This included a common toolbar running down the left side of the screen, and a number of standardized tool palettes. In addition, the Pro series also used common international spelling dictionaries and a thesaurus. The result was a suite of products that all looked and worked the same way, and were able to read and write each other's formats. The resulting MacWrite Pro, released in early 1993, was a major upgrade from previous versions. Reviewers almost universally praised the new release as offering all the required tools while still being very easy to use. However, development had been slow; one developer claimed it was primarily due to extremely demanding quality assurance requirements. By the time MacWrite Pro was released, Word completely dominated the word processor market. Pro did little to address MacWrite's rapidly dwindling market share, which briefly stabilized at about 5% of the market before starting to slide again. Sales were apparently dismal, and it was one of the first products Claris abandoned in the mid-1990s. The word-processing module of AppleWorks is very similar to MacWrite Pro. While it was written entirely from scratch, it retained some of the design limitations of MacWrite Pro. However, later versions of AppleWorks are unable to read older MacWrite Pro files. Reception In a survey of five Macintosh word processors, Compute!'s Apple Applications in 1987 wrote that "once a bold pioneer, MacWrite now seems frozen in time ... 
it lags behind other word processors in power and responsiveness, and it's clearly unsuited for outlining, layout, and other advanced tasks". Version history See also List of word processors Pages, the word processor in Apple's iWork suite References Citations Bibliography Stan Liebowitz, "Word processors", University of Texas External links MacWrite at mac512.com (Archived version) Word processors (shows a chart indicating MacWrite II's brief but meteoric rise in market share) 1984 software Discontinued software Classic Mac OS software Classic Mac OS word processors Classic Mac OS-only software made by Apple Inc.
2440001
https://en.wikipedia.org/wiki/SFlow
SFlow
sFlow, short for "sampled flow", is an industry standard for packet export at Layer 2 of the OSI model. sFlow was originally developed by InMon Corp. It provides a means for exporting truncated packets, together with interface counters for the purpose of network monitoring. Maintenance of the protocol is performed by the sFlow.org consortium, the authoritative source of the sFlow protocol specifications. The current version of sFlow is v5. Operation sFlow uses mandatory sampling to achieve scalability and is, for this reason, applicable to high speed networks (gigabit per second speeds and higher). sFlow is supported by multiple network device manufacturers and network management software vendors. An sFlow system consists of multiple devices performing two types of sampling: random sampling of packets or application layer operations, and time-based sampling of counters. The sampled packet/operation and counter information, referred to as flow samples and counter samples respectively, are sent as sFlow datagrams to a central server running software that analyzes and reports on network traffic; the sFlow collector. Flow samples Based on a defined sampling rate, an average of 1 out of n packets/operations is randomly sampled. This type of sampling does not provide a 100% accurate result, but it does provide a result with quantifiable accuracy. Counter samples A polling interval defines how often the network device sends interface counters. sFlow counter sampling is more efficient than SNMP polling when monitoring a large number of interfaces. sFlow datagrams The sampled data is sent as a UDP packet to the specified host and port. The official port number for sFlow is port 6343. The lack of reliability in the UDP transport mechanism does not significantly affect the accuracy of the measurements obtained from an sFlow agent. If counter samples are lost then new values will be sent when the next polling interval has passed. 
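The random 1-in-n packet sampling described above, together with the collector's scaling step, can be illustrated with a toy simulation. This is a sketch of the statistical idea only, not the sFlow wire protocol; the packet sizes, sampling rate, and seeds are arbitrary illustrative choices:

```python
import random

def sample_packets(packet_sizes, rate, seed=42):
    """Sample roughly 1 in `rate` packets at random, as an sFlow agent would."""
    rng = random.Random(seed)
    return [size for size in packet_sizes if rng.randrange(rate) == 0]

# Simulate 100,000 packets with sizes between 64 and 1500 bytes.
rng = random.Random(0)
packets = [rng.randint(64, 1500) for _ in range(100_000)]

rate = 128
samples = sample_packets(packets, rate)

# The collector multiplies sampled totals by the sampling rate to
# estimate overall traffic; the estimate is approximate, but its
# accuracy is quantifiable from the sample count.
estimated = sum(samples) * rate
actual = sum(packets)
print(f"{len(samples)} samples; estimated {estimated} vs actual {actual} bytes")
```

Raising the sampling rate n lowers the agent's overhead at the cost of wider confidence intervals, which is how sFlow stays practical on multi-gigabit links.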
The loss of packet flow samples results in a slight reduction of the effective sampling rate. The UDP payload contains the sFlow datagram. Each datagram provides information about the sFlow version, the originating device's IP address, a sequence number, the number of samples it contains, and one or more flow and/or counter samples. sFlow versions Related technologies A well-known alternative is NetFlow (see below). Moreover, depending on the IT resources available, full packet captures can be performed using dedicated network taps, with the captures analysed afterwards. NetFlow, IPFIX NetFlow and IPFIX are flow export protocols that aim at aggregating packets into flows. After that, flow records are sent to a collection point for storage and analysis. sFlow, however, has no notion of flows or packet aggregation at all. sFlow allows for exporting packet data chunks and interface counters, which are non-typical features of flow export protocols. Note, however, that recent IPFIX developments provide a means for exporting SNMP MIB variables and packet data chunks. While flow export can be performed with 1:1 sampling (i.e., considering every packet), this is typically not possible with sFlow, as it was not designed to do so. See also NetFlow Network Management Packet analyzer RMON References External links Official site Differences between Sflow vs Netflow Network management Computer network analysis
35835575
https://en.wikipedia.org/wiki/One%20Horizon%20Group
One Horizon Group
One Horizon Group, Plc. is a technology company that licenses software. History One Horizon Group started in the banking software industry, operating as Abbey Technology, which developed and licensed its software to banks in Switzerland and also licensed its software through the banking software distributor Finnova AG. The core of the Abbey Technology suite of products was a proprietary web application server with customized add-on components designed and delivered to customers' requirements. In November 2010, Abbey Technology merged with Satcom Group Holdings and Brian Collins joined the board of the new company as Chief Technology Officer while continuing as CEO of the wholly owned subsidiary Abbey Technology. The company focused its R&D efforts on developing the Horizon platform for mobile network operators. Shortly after the merger, on 21 January 2011, the company changed its name from SatCom Group Holdings to One Horizon Group. In 2012, the company announced plans for an initial public offering in Hong Kong. In October, the company sold the Satcom Global business and all the subsidiaries involved in satellite communications to Broadband Satellite Services Limited. In November, a Securities Exchange Agreement was completed between Intelligent Communication Enterprise Corporation and One Horizon Group. Following this, in December, the American company filed to change its name from Intelligent Communication Enterprise Corporation to One Horizon Group, Inc. FINRA approved this change, effective 31 January 2013. Along with the name change, Intelligent Communication Enterprise Corporation also filed with FINRA to change its ticker symbol from ICMC to OHGI; this was also approved. On 17 September 2013, the company opened a new software research and development office in the Nexus Innovation Centre on the campus of the University of Limerick in Ireland. 
On 2 July 2014, the company received approval from NASDAQ's Listing Qualifications Department to list its common stock on the NASDAQ Capital Market. The common stock commenced trading on the NASDAQ Capital Market on 3 July 2014 under the same ticker symbol, OHGI. On 11 August 2017, the company consummated a Stock Purchase Agreement pursuant to which Brian J. Collins, the company's previous Chief Executive Officer, acquired the outstanding capital stock in four of the company's subsidiaries: Horizon Globex GmbH, Horizon Globex Ireland Ltd., One Horizon Group Plc. and Abbey Technology GmbH. Upon conclusion of the Stock Purchase Agreement, Mr. Collins resigned from all positions held within One Horizon Group, Inc. and took ownership of One Horizon Group Plc. On 26 September 2019, One Horizon Group, Inc. announced that it had signed a binding agreement to acquire Midnight Gaming, a leading brand in the esports entertainment space, subject to the terms of closing. The company also announced an immediately effective name change to Touchpoint Group Holdings, Inc., which represents the company's expanded focus on digital marketing and entertainment. References External links Software companies of the United Kingdom
39366634
https://en.wikipedia.org/wiki/Manolis%20Sfakianakis
Manolis Sfakianakis
Manolis Sfakianakis (; born 9 February 1963) is a retired Greek police officer of the Hellenic Police who served as Head of the Cyber Crime Division (Greek: ) of the Hellenic Police from 2004 to 2016, attaining the rank of Police Lieutenant General (Retired) in 2017. Early life Manolis Sfakianakis (or Emmanouil, Manos; Greek: ) was born in Chania, Crete, and was raised in the village of Spilia (Greek: ). He has a sister and a brother. He attended the Technical Lyceum of Chania (now the Vocational Lyceum of Chania), specialising in electronics, and then enrolled in a two-year computer programming programme at the private secondary vocational school Korelko (now IEK Korelko) in Athens. In 1982 he entered the Enomotarchon School of the Hellenic Gendarmerie (Greek: ; now defunct) in Rhodes, a non-commissioned officers' academy. After a ten-month programme he graduated as a Sergeant Major (Greek: ). Subsequently, from 1986 to 1990, he attended the Officers' School of the Hellenic Police Academy in Athens. Career His assignments included the Police Department of the village of Salakos on the island of Rhodes; the Police Department of Ialysos in Rhodes; the Police Department of Agios Nikolaos, Chalkidiki; the Police Security Department of Syntagma in Athens from 1992, where he served first as Deputy Director and, from the following year, as Director, succeeding the previous Director, who had been dismissed; and the Financial Crime Unit of the Security Directorate of Attica from 1995. He was then appointed Head of the Cyber Crime Unit (CCU) at the Unit's founding in 2004, hosted in the General Police Directorate of Attica (GADA), and he held that post for almost twelve consecutive years.
While in charge of the Unit, he took part in its 2011 reform, when it was upgraded to a subdivision and renamed the Cyber Crime Subdivision, and in its 2014 upgrade to a division, the Cyber Crime Division, so that its operational standards could keep pace with technological advancement and greater internet accessibility, which had brought a growth in crimes committed over the internet. Since its founding in 2004, however, it has continued to be colloquially known as the Cyber Crime Unit or Cyber Crime Center. Upon his promotion to the rank of Brigadier General, announced in a press release of 2 March 2013, he was appointed Deputy Head of the Authority of Financial Police and Cyber Crime Subdivision (), a post he held concurrently with that of Head of the Cyber Crime Subdivision. In a press release issued by the Hellenic Police Headquarters on 18 February 2016 he was appointed Assistant to the Chief of Staff of the Hellenic Police Headquarters, reporting to the Chief of Staff, Lieutenant General Zacharoula Tsirigoti. A press release issued by the Hellenic Police Headquarters on 21 January 2017 announced his promotion to the rank of Police Lieutenant General (Retired). During his years at the Cyber Crime Unit, the Unit intervened in a number of suicide attempts. Additionally, on 24 October 2011 the Cyber Crime Unit launched a mobile application called CyberKid, accompanied by a website of the same name, funded through private sponsorship by Wind Hellas and providing useful information to children and internet users. The CyberKid application for portable devices (smartphones and tablets) is available as a free download from Google Play, the App Store and the Microsoft Store, and lets users contact the Cyber Crime Division directly in the event of a cyber incident.
The CyberKid mobile application is an initiative of the Hellenic Police Headquarters and the Ministry of Public Order and Citizen Protection, implemented by the Cyber Crime Division. Awards On 22 December 2015, the Academy of Athens bestowed the Special Award (Greek name: ) of its Class of Ethics and Political Sciences on Manolis Sfakianakis at its annual awards event, in recognition of his contribution and social work at the Cyber Crime Division. The ceremony took place during the formal sitting in the Great Ceremony Hall of the Academy of Athens. On 20 November 2014, the UNICEF Greece 2014 Award was bestowed both on the Cyber Crime Division of the Hellenic Police and on its head, Manolis Sfakianakis, in recognition of their contribution to promoting and protecting the rights of children in Greece, at the UNICEF Greek National Committee Awards 2014, the 25th celebration commemorating the Declaration of the Rights of the Child on World Children's Day. The ceremony took place in the Book Arcade of the Arsakeion Mansion in Athens. On 13 February 2006, an Honorary Distinction () was bestowed both on the Cyber Crime Unit and on its head in recognition of their valuable social work by the Ministry of National Education and Religious Affairs, with Minister Marietta Giannakou presenting the award to Manolis Sfakianakis in a ceremony held at the General Secretariat for Youth in Athens. Publications Books: Emmanouil Sfakianakis, The internet code (), Athens: All About Internet IKE Publications, 2016, Co-written books (collective works): Emmanouil Sfakianakis, George Floros, Konstantinos Siomos, psychologist collaborators Evangelos Makris, Genovefa Christou, Virginia Fyssoun, Addiction to the internet and other high-risk internet behaviour (), Athens: A. A.
Livanis Publishing Organization, 2012, Emmanouil Sfakianakis, Vera Athanasiou, Parents, child, internet (), Athens: Psichogios Publications, 2017, Emmanouil Sfakianakis, Vera Athanasiou, The Ben and the tablet trap (), Athens: Psichogios Publications, 2017, Emmanouil Sfakianakis, Vera Athanasiou, The Ben and the magic screen (), Athens: Psichogios Publications, 2017, Emmanouil Sfakianakis, Vera Athanasiou, The Ben and cyberbullying (), Athens: Psichogios Publications, 2017, Emmanouil Sfakianakis, Ioannis Makripoulias, The keys of the internet (), Athens: All About Internet IKE Publications, 2016, Criticism and controversies Supreme Civil and Criminal Court of Greece 305/2019 irrevocable decision By irrevocable decision 305/2019 (Greek: αμετάκλητο βούλευμα) of the Z' Criminal Department of the Supreme Civil and Criminal Court of Greece (Greek: Άρειος Πάγος), issued on 15 February 2019, with its documenting release published on the court's website on 5 November 2018, Manolis Sfakianakis was sent to trial, accused of infringing the penal law concerning arbitrary abuse of power, a felony offense (Greek: ), committed while he was Head of the Cyber Crime Unit of the Hellenic Police. The decision was taken on 5 November 2018 by the judges Aggeliki Aliferopoulou (), vice-president of the Supreme Civil and Criminal Court of Greece, Dimitrios Georgas () and Grigorios Koutsokostas (), in the presence of the court's Deputy Public Prosecutor Panagiotis Pagou () and Secretary Aikaterini Anagnostopoulou ().
The Supreme Civil and Criminal Court of Greece dismissed his appeal against the earlier referral order of the Judicial Council of Appeal Court Judges (Greek: Συμβούλιο Εφετών), because the court record of the irrevocable decision, together with the findings of the investigation, showed that Manolis Sfakianakis was implicated in the case: he appeared to have known of and suggested methods, and to have told the 37-year-old journalist Sokratis Giolias about an investigation the Cyber Crime Unit was to conduct at his residence in the coming days, so that Giolias could evade criminal charges and remove his computer devices and any evidence that would objectively identify the administrator of the Troktiko online journal. Also according to the court record, based on a phone call between them on 16 July 2010, Manolis Sfakianakis had personally informed Sokratis Giolias of a contract killing planned against him. Sokratis Giolias, a journalist of the Proto Thema newspaper and director of the Theme 98.9 FM radio station and the Troktiko online journal, was assassinated on 19 July 2010 by gunmen waiting in ambush, who shot him outside his residence in Athens. On 27 July 2010, in a written statement sent to the Ta Nea newspaper, the terror group "Sect of Revolutionaries" claimed responsibility for the murder. The case remains unsolved and the murderers are still unknown. For procedural rather than substantive reasons, Manolis Sfakianakis was not brought to trial for the offenses of arbitrary abuse of power and obstruction of justice, because this criminal charge was removed from the new Criminal Code of Greece on 11 June 2019 by laws 4619/2019 and 4620/2019; the Supreme Civil and Criminal Court of Greece had ruled before the two laws' effective date of 1 July 2019, and before the general election held on 7 July 2019. Manolis Sfakianakis himself denies any wrongdoing and has argued that he has no relation to the murder.
The lawyer Christos Mylonopoulos, legal representative of Manolis Sfakianakis, stated: "The criminal act which the principal (Manolis Sfakianakis) is accused of committing can no longer be tried in court, and criminal charges can no longer be brought, because it has now been removed from the new Criminal Code of Greece". Transfer to a different term of office (2016–2017) Under the chairmanship of the Chief of the Hellenic Police, Lieutenant General Konstantinos Tsouvalas, and with the participation of two army Lieutenant Generals, the Supreme Council of Assessments of the Hellenic Police Headquarters () met and considered the 22 Major Generals of the Hellenic Police. The 2016 annual assessments for Police Major Generals were announced in a press release issued by the Hellenic Police Headquarters on 17 February 2016, by which Manolis Sfakianakis retained the rank of Major General and was to be assigned Head of the Administrative Support and Human Resources Branch at the Hellenic Police Headquarters (). A subsequent press release on the 2016 annual assessments for Brigadier Generals, issued by the Hellenic Police Headquarters on 18 February 2016, contained an official announcement stating that the request of Manolis Sfakianakis had been accepted and his assignment to the Administrative Support and Human Resources Branch cancelled. The transfer attracted intense criticism from a large part of the public when it was reported in the media, with written comments made mostly on Facebook and Twitter. The Members of Parliament Eleftherios Avgenakis (New Democracy), Odysseas Konstantinopoulos (Democratic Coalition) and Georgios Amyras (To Potami) submitted a joint parliamentary question.
On 18 February 2016 the Hellenic Police Headquarters sent a written announcement () to the Cyber Crime Division, forwarded to a number of officials, in which the 17 February 2016 assignment of Manolis Sfakianakis was rescinded and his assignment to the term of office of Assistant to the Chief of Staff of the Hellenic Police Headquarters (HQ), with oversight of the Cyber Crime Division, was approved. The vacant position of Head of the Cyber Crime Division was filled by Police Colonel George Papaprodromou (), who held the post from 27 May 2016 to 2 November 2018. In an article in the Eleftheros Typos newspaper on 21 February 2016, M. Sfakianakis commented: "If I did not have the confidence and support of the Prime Minister (Alexis Tsipras), and the people's love, they would have already demobilized me. I am an "army staff" but not a "soldier". I am a Police Major General; as they did for other colleagues, they should have notified me of the transfer assignment. If they had done that, I would have directly accepted the assignment as Head of the Administrative Support and Human Resources Branch. Why did they call me now? They told me, you will be assigned Assistant to the Chief of Staff, and I accepted it without any protest". Concerning the transfer of Manolis Sfakianakis, speaking for the then Ministry of the Interior and Administrative Reconstruction, Alternate Minister of Citizen Protection Nikos Toskas said: "Manolis Sfakianakis has already strained the tolerance of the Hellenic Police with his excessive number of public media appearances on his own behalf, taking advantage of his post. In an ordered system there should be no room for conceits. Personal capacity assignments do not exist. No one can occupy a post for many consecutive years, nor acquire any ownership right by reason of a prolonged stay in the post. Everyone must respect the rules.
But there are other young officers with exceptional knowledge who are not seen and do not have access to the mass media. It would be right to offer them access. I have absolutely no interest in anyone's choice of political party; my concern for the police force is that the work must be done. This is not about one person; all police members should be showcased and valued. I am not concerned with whether he accepts the post offered to him. This is standard practice, whether one agrees with it or not. M. Sfakianakis holds the rank of Police Major General, and he could not hold the same post through all his ranks simply because it is the only post he knows. This is a matter determined by the Greek police force's hierarchy. Some of his closest supporters, for reasons of political expediency, point to the unworthy person rather than the worthy […]". The statement revealed the Ministry's dissatisfaction with Manolis Sfakianakis' actions, accusing him of obstruction of justice and of violating the Code of Ethics after he had exceeded his remit and breached rules; as a result of disciplinary proceedings he would be transferred to another position while retaining his Major General rank. Nikos Toskas also concluded that "there will be no further tolerance in this regard; his self-promotional actions on duty have worsened, the red lines have already been crossed, and these conceits cannot be tolerated". Minister of Interior and Administrative Reconstruction Panagiotis Kouroumblis stated: "I think the issue has been given greater emphasis than it really deserves. The assessments do not detract from Mr. Sfakianakis, and since he does not cease to be an evolving person who may advance further next year, such a transfer must first take place. The Hellenic Police has its own logic (a definite framework of progression by default). If he declines the post offered to him because he still wants to deal with this objective (cyber crime law enforcement), that is a different matter.
Because we have to understand that in this particular sector (the police force) nobody can remain permanently in one assignment". Nonetheless, one year later, on 21 January 2017, the Supreme Council of Assessments of the Hellenic Police Headquarters, at its 2017 annual assessments for Major Generals, eventually decided to discharge Manolis Sfakianakis from active duty, promoting him to the rank of Lieutenant General upon retirement. References 1963 births People from Chania Living people
254562
https://en.wikipedia.org/wiki/Presentation%20Manager
Presentation Manager
Presentation Manager (PM) is the graphical user interface (GUI) that IBM and Microsoft introduced in version 1.1 of their operating system OS/2 in late 1988. History Microsoft began developing a graphical user interface in 1981. After it persuaded IBM that the latter also needed a GUI, Presentation Manager (PM; codenamed Winthorn) was co-developed by Microsoft and IBM's Hursley Lab in 1987–1988. It was a cross between Microsoft Windows and IBM's mainframe graphical system (GDDM). Like Windows, it was message-based, and many of the messages were even identical, but there were a number of significant differences as well. Although Presentation Manager was designed to be very similar to the upcoming Windows 2.0 from the user's point of view, and Presentation Manager application structure was nearly identical to Windows application structure, source compatibility with Windows was not an objective. For Microsoft, the development of Presentation Manager was an opportunity to clean up some of the design mistakes of Windows. The two companies stated that Presentation Manager and Windows 2.0 would remain almost identical. One of the most significant differences between Windows and PM was the coordinate system. While in Windows the 0,0 coordinate was located in the upper left corner, in PM it was in the lower left corner. Another difference was that in Windows all drawing operations went to the Device Context (DC). PM also used DCs, but there was an added level of abstraction called the Presentation Space (PS). OS/2 also had more powerful drawing functions in its Graphics Programming Interface (GPI). Some of the GPI concepts (like viewing transforms) were later incorporated into Windows NT. The OS/2 programming model was considered cleaner, since there was no need to explicitly export the window procedure, no WinMain, and no non-standard function prologs and epilogs.
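The coordinate-system difference described above can be made concrete with a small conversion helper. This is purely illustrative Python, not code from either API: the real systems express such conversions through GDI mapping modes and GPI transforms rather than a standalone function.

```python
def windows_to_pm(x: int, y: int, height: int) -> tuple:
    """Convert a pixel position from the Windows convention (origin 0,0 at
    the upper-left corner, y growing downward) to the Presentation Manager
    convention (origin 0,0 at the lower-left corner, y growing upward).

    `height` is the height of the drawing area in pixels; the x axis is
    identical in both systems, so only y needs to be flipped.
    """
    return x, height - 1 - y
```

Applying the same function twice returns the original point, since the flip is its own inverse.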
Parting ways One of the most-cited reasons for the IBM-Microsoft split was the divergence of the APIs between Presentation Manager and Windows, which was probably driven by IBM. Initially, Presentation Manager was based on Windows GUI code, and often had developments performed in advance, like the support for proportional fonts (which appeared in Windows only in 1990). One of the divergences regarded the position of coordinate (0,0), which was at the top-left in Windows, but at bottom-left (as in Cartesian coordinates) in Presentation Manager. In practice it became impossible to recompile a GUI program to run on the other system; an automated source code conversion tool was promised at some point. Both companies were hoping that at some point users would migrate to OS/2. In 1990, version 3.0 of Windows was beginning to sell in volume, and Microsoft began to lose interest in OS/2 especially since, even earlier, market interest in OS/2 was always much smaller than in Windows. The companies parted ways, and IBM took over all of subsequent development. Microsoft took OS/2 3.0, which it renamed Windows NT; as such, it inherited certain characteristics of Presentation Manager. IBM continued to develop Presentation Manager. In subsequent versions of OS/2, and derivatives such as ArcaOS, it was used as a base for the object-oriented interface Workplace Shell. There is a significant integration of the GUI layer with the rest of the system, but it is still possible to run certain parts of OS/2 from a text-console or X window, and it is possible to boot OS/2 into a command-line environment without Presentation Manager (e.g. using TSHELL ). Presentation Manager for Unix In the late 1980s, Hewlett-Packard and Microsoft collaborated on an implementation of Presentation Manager for Unix systems running the X11 windowing system. 
The port consisted of two separate pieces of software: a toolkit, window manager and style guide named CXI (Common X Interface), and an implementation of the Presentation Manager API for Unix named PM/X. Both CXI and PM/X were submitted to the Open Software Foundation for consideration as OSF's new user interface standard for Unix, which eventually became Motif. OSF ultimately selected CXI, but used Digital Equipment Corporation's XUI API instead of PM/X. Microsoft and HP continued the development of PM/X for some time after the release of Motif, but it was ultimately abandoned. Technical details PM follows the Common User Access interface conventions. It also supports mouse chording for copying and pasting text. An important problem was that of the single input queue: a non-responsive application could block the processing of user-interface messages, thus freezing the graphical interface. This problem was solved in Windows NT, where such an application would just become a dead rectangle on the screen; in later versions it became possible to move or hide it. In OS/2 it was solved in a FixPack, using a timer to determine when an application was not responding to events. See also Program Manager References External links OS/2 Graphical user interfaces
6665926
https://en.wikipedia.org/wiki/Data%20Design%20System
Data Design System
Data Design System AS (DDS) supplies the construction industry with software tools for building information modelling (BIM). The company was founded in 1984 in Stavanger, Norway. Since then, more than 13,500 licenses of DDS-CAD have been installed, mainly in Europe. DDS is an active member of buildingSMART. DDS has its headquarters in Stavanger, Norway. Other locations include Oslo and Bergen (both in Norway). DDS has several subsidiaries, among them DDS Building Innovation AS and Data Design System GmbH. DDS is a company in the Nemetschek Group. The main product line is tools for building services/MEP (mechanical, electrical, plumbing) engineers. Data Design System GmbH distributes DDS-CAD MEP, mainly in continental Europe. Data Design System GmbH has its main office in Ascheberg, Germany. DDS Building Innovation AS develops software tools for the design and production of timber-frame buildings. The main product is DDS-CAD Architect & Construction. DDS Building Innovation is located at the DDS headquarters in Stavanger. See also Comparison of CAD editors for AEC Comparison of CAD, CAM and CAE file viewers References External links Building information modeling Computer-aided design software Computer-aided design software for Windows Software companies of Norway Software companies established in 1984
40606253
https://en.wikipedia.org/wiki/LibGDX
LibGDX
libGDX is a free and open-source game-development application framework written in the Java programming language, with some C and C++ components for performance-dependent code. It allows for the development of desktop and mobile games using the same code base. It is cross-platform, supporting Windows, Linux, Mac OS X, Android, iOS, BlackBerry and web browsers with WebGL support. History In mid-2009 Mario Zechner, the creator of libGDX, wanted to write Android games and started developing a framework called AFX (Android Effects) for this purpose. When he found that deploying changes from the desktop to an Android device was cumbersome, he modified AFX to work on the desktop as well, making it easier to test programs. This was the first step toward the game framework later known as libGDX. In March 2010 Zechner decided to open-source AFX, hosting it on Google Code under the GNU Lesser General Public License (LGPL). However, at the time he stated that "It's not the intention of the framework to be used for creating desktop games anyway", intending the framework to primarily target Android. In April, it got its first contributor. When Zechner created a Box2D JNI wrapper, this attracted more users and contributors because physics games were popular at the time. Many of the issues with Android were resolved as a result. Because many users suggested switching to a different license, as the LGPL was not suitable for Android, libGDX changed its license to the Apache License 2.0 in July 2010, making it possible to use the framework in closed-source commercial games. The same month its phpBB forum was launched. Due to issues with Java Sound, the desktop audio implementation switched to OpenAL in January 2011. Development of a small image manipulation library called Gdx2D was finished as well, which depends on the open-source STB library. The rest of 2011 was spent adding a UI library and working on the basics of a 3D API.
At the start of 2012 Zechner created a small helper library called gdx-jnigen for easing the development of JNI bindings. This made it possible for the gdx-audio and gdx-freetype extensions to be developed over the following months. Inspired by Google's PlayN cross-platform game development framework, which used Google Web Toolkit (GWT) to compile Java to JavaScript code, Zechner wrote an HTML/JavaScript backend over the course of several weeks, which allowed libGDX applications to be run in any browser with WebGL support. After Google abandoned PlayN, it was maintained by Michael Bayne, who added iOS support to it. libGDX used parts of this work for its own MonoTouch-based backend. In August 2012 the project switched its version control system from Subversion to Git, moving from Google Code to GitHub. However, the issue tracker and wiki remained on Google Code for another year. The main build system was also changed to Maven, making it easier for developers with different IDEs to work together. Because of issues with the MonoTouch iOS backend, Niklas Thernig wrote a RoboVM backend for libGDX in March 2013, which was integrated into the project in September. From March to May 2013 a new 3D API was developed as well and integrated into the library. In June 2013 the project's website was redone, now featuring a gallery where users can submit their games created with libGDX; more than 3,000 games have been submitted. After the source code migration to GitHub the year before, in September 2013 the issue tracker and wiki were also moved there from Google Code. The same month the build and dependency management system was switched from Maven to Gradle. After a cleanup phase in the first months of 2014, libGDX version 1.0 was released on 20 April, more than four years after the start of the project. In 2014 libGDX was one of the annual Duke's Choice Award winners, being chosen for its focus on platform independence.
In April 2016 it was announced that libGDX would switch to Intel's Multi-OS Engine on the iOS backend after the discontinuation of RoboVM. With the release of libGDX 1.9.3 on 16 May 2016, Multi-OS is provided as an alternative, while by default the library uses its own fork of the open-source version of RoboVM. libGDX Jam From 18 December 2015 to 18 January 2016 a libGDX game jam was organized together with RoboVM, itch.io and Robotality. From initially 180 theme suggestions, "Life in space" was chosen as the jam's main theme, and 83 games were created over the course of the competition. Release versions Architecture libGDX allows the developer to write, test, and debug their application on their own desktop PC and use the same code on Android. It abstracts away the differences between a common Windows/Linux application and an Android application. The usual development cycle consists of staying on the desktop PC as much as possible while periodically verifying that the project still works on Android. Its main goal is to provide total compatibility between desktop and mobile devices, the main difference being speed and processing power. Backends The library transparently uses platform-specific code through various backends to access the capabilities of the host platform. Most of the time the developer does not have to write platform-specific code, except for starter classes (also called launchers) that require different setup depending on the backend. On the desktop the Lightweight Java Game Library (LWJGL) is used. There is also an experimental JGLFW backend that is no longer being developed. In version 1.8 a new backend was introduced, intended to replace the older one. The HTML5 backend uses the Google Web Toolkit (GWT) to compile the Java code to JavaScript, which is then run in a normal browser environment. libGDX provides several implementations of standard APIs that are not directly supported there, most notably reflection.
The Android backend runs Java code compiled for Android with the Android SDK. For iOS a custom fork of RoboVM is used to compile Java to native iOS instructions. Intel's Multi-OS Engine has been provided as an alternative since the discontinuation of RoboVM. Other JVM languages While libGDX is written primarily in Java, the compiled bytecode is language-independent, allowing many other JVM languages to use the library directly. The documentation specifically states interoperability with Ceylon, Clojure, Kotlin, Jython, JRuby and Scala. Extensions Several official and third-party extensions exist that add additional functionality to the library. gdxAI An artificial intelligence (AI) framework that was split from the main library with version 1.4.1 in October 2014 and moved into its own repository. While it was initially made for libGDX, it can be used with other frameworks as well. The project focuses on AI useful for games, among them pathfinding, decision making and movement. gdx freetype Can be used to render FreeType fonts at run time instead of using static bitmap images, which do not scale as well. Box2D A wrapper for the Box2D physics library was introduced in 2010 and moved to an extension with the 1.0 release. packr A helper tool that bundles a custom JRE with the application so end users do not need to have a JRE installed themselves.
Notable games Drag Racing: Streets Ingress (before it was relaunched as Ingress Prime) Slay the Spire Delver HOPLITE Deep Town Sandship Unciv Mindustry Space Haven Pathway Halfway Riiablo Mirage Realms Raindancer PokeMMO Zombie Age 3 Epic Heroes War Shattered Pixel Dungeon Hair Dash Antiyoy See also References External links Audio libraries Cross-platform free software Free 3D graphics software Free computer libraries Free game engines Free graphics software Free software programmed in Java (programming language) Graphics libraries Java APIs Java (programming language) libraries OpenGL Software using the Apache license Video game development software Video game engines
10848863
https://en.wikipedia.org/wiki/Customer%20edge%20router
Customer edge router
The customer edge router (CE) is the router at the customer premises that is connected to the provider edge router of a service provider's IP/MPLS network. The CE router peers with the provider edge router (PE) and exchanges routes with the corresponding VRF inside the PE. The routing can be static or use a dynamic protocol (an interior gateway protocol such as OSPF, or an exterior gateway protocol such as BGP). The customer edge router can be owned by either the customer or the service provider. See also Provider edge router Provider router References Routers (computing) MPLS networking
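For illustration, the CE–PE route exchange described above might be configured with eBGP along the following lines. This is a hypothetical IOS-style sketch, not taken from the article; all addresses and AS numbers are documentation values.

```
! Customer edge (CE) router -- may be customer-owned or provider-owned
router bgp 65001                          ! customer's AS
 neighbor 192.0.2.1 remote-as 64500       ! PE interface address, provider AS
 network 198.51.100.0 mask 255.255.255.0  ! customer prefix advertised to the PE
```

On the provider side, the routes learned from this session would be installed into the VRF that the PE associates with this customer.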
2350183
https://en.wikipedia.org/wiki/Rocks%27n%27Diamonds
Rocks'n'Diamonds
Rocks'n'Diamonds is a puzzle video game with elements of Boulder Dash, Supaplex, Emerald Mine, Solomon's Key, and Sokoban. It is free software under the GNU GPL-2.0-only license, created by Artsoft Entertainment and designed by Holger Schemel. Gameplay Rocks'n'Diamonds features gameplay elements from all the games mentioned above, usually in the form of sub-games, although levels can feature combinations of elements from any of those games, as well as new ones. There are currently more than 50,000 levels available on Rocks'n'Diamonds-related pages. Rocks'n'Diamonds can also read native levels from the games Emerald Mine, Supaplex and Diamond Caves II. Boulder Dash The Boulder Dash game involves collecting a set number of diamonds, after which an exit door opens through which the player can enter the next level. The levels are filled with dirt, which can be dug simply by moving through it; this creates empty space. Diamonds can be collected by moving into them. Rocks and diamonds can rest on dirt, walls (only indestructible and slippery/magic walls), or other rocks and gems, but once these are removed (or the space next to them), they will fall down. This is sometimes useful, as the player can drop things on top of monsters (butterflies and fireflies) roaming the levels. Some destroyed monsters drop gems, which can help reach the number needed to complete the level. Amoeba can be dangerous and unpredictable, but also occasionally useful, for example when there are too few diamonds or when a monster needs to be destroyed. Supaplex and Emerald Mine The Supaplex and Emerald Mine games can be considered clones of Boulder Dash themselves, although they have added elements, including explosives, acid, locked doors with matching keys, and more.
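The falling-object rule described above can be sketched in a few lines. This is an illustrative Python model, not the game's actual engine; the real rules also cover slippery walls, sideways rolling, crushing the player, and explosions.

```python
def gravity_step(grid):
    """One update tick for the falling-object rule: a rock ('O') or
    diamond ('*') with empty space (' ') directly below falls one cell.

    `grid` is a list of rows (row 0 at the top), mutated in place.
    Rows are scanned bottom-up so each object falls at most one cell
    per tick, as in turn-based Boulder Dash-style engines.
    """
    for row in range(len(grid) - 2, -1, -1):
        for col in range(len(grid[row])):
            if grid[row][col] in "O*" and grid[row + 1][col] == " ":
                grid[row + 1][col] = grid[row][col]
                grid[row][col] = " "
    return grid
```

A rock keeps descending one cell per call until something solid, such as dirt or a wall, lies beneath it, which is why removing the dirt under a rock makes it drop on whatever is below.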
Rocks'n'Diamonds provides a download of approximately 50,000 Emerald Mine levels; however, it can play only a limited number of them under its primary engine, so it uses an older version of Emerald Mine for X11 to play those levels. Sokoban The Sokoban game is a pure puzzle and can be considered to be viewed from above, as its elements are not affected by gravity. This game lets the player push giant light bulbs into sockets in order to finish the level. Level editor The game includes a level editor that lets the player create custom levels. The game also supports custom graphics, as well as whole new level elements which can be created without any programming knowledge. Development Released in 1995, it is one of the earliest games available for Linux, and it also runs on MS-DOS, Microsoft Windows, Unix, and Mac OS X. The MS-DOS version is based on code by Guido Schulz. The native Emerald Mine game engine is based on an older version of Emerald Mine for X11 by David Tritscher, which is used to read and play all native Emerald Mine levels. Since 2014 the source code has been available via a Git repository. The game was later ported to various platforms, for instance in 2014 to the OpenPandora handheld. Reviews The game has been praised and noted by Free Software Magazine and Linux Magazine. See also List of open-source video games References External links Forum Official forum Rocks'n'Diamonds page A complete overview 1995 video games DOS games Puzzle video games Linux games MacOS games MorphOS games Rocks-and-diamonds games Windows games AmigaOS 4 games Multiplayer and single-player video games Free software programmed in C Software using the GPL license Open-source video games Video games developed in Germany Unix games
63510049
https://en.wikipedia.org/wiki/Recovery%20Toolbox
Recovery Toolbox
Recovery Toolbox is a collection of utilities and online services for recovering data from corrupted files in various formats and for repairing passwords for various programs. Free utilities Recovery Toolbox for CD Free Free tool for recovering data from compact discs that have been physically damaged (scratched, exposed to liquids, etc.) or are affected by system errors. Recovery Toolbox File Undelete Free Free tool for recovering deleted files in the Windows operating system. It supports the NTFS file system, but does not support recovery of data stored on solid-state drives (SSDs). Shareware utilities Recovery Toolbox for Flash Recovers deleted files from various storage media with FAT file systems, including SD, CF, MMC and other memory cards, smart media cards, IBM MicroDrives, Flash and USB drives, digital cameras, and floppy disks. Recovery Toolbox for RAR Repairs data from damaged RAR archives. Supports all existing RAR formats and compression ratios, password-protected archives, and archives stored on corrupted media. Recovery Toolbox for Excel Repairs corrupted Microsoft Excel files. Supports most tabular data, styles, fonts, sheets, formulas, functions, cell colors, borders, etc. Recovery Toolbox for Outlook Fixes errors encountered when working with Microsoft Outlook and repairs data such as emails, contacts, reminders, meetings, tasks, notes, calendar entries, logs, and other data from corrupted PST and OST files.
Web services In addition to working as a specialized installed tool, Recovery Toolbox supports data repair via web services such as: Adobe file formats: PDF documents and presentations (Adobe Acrobat/PDF Reader), AI image files (Adobe Illustrator), and PSD project files (Adobe Photoshop) Microsoft Office file formats: Excel spreadsheets, Word documents (including RTF), PowerPoint presentations, Project files; and email formats: PST and OST (Outlook), and DBX (Outlook Express) Other image file formats: DWG (AutoCAD) and CDR (CorelDraw) Database formats: ACCDB and MDB (Access), DBF (FoxPro/Clipper/dBase, etc.) About the developer Recovery Toolbox, Inc. is an IT company which has been developing software for repairing damaged files since 2003. To date, solutions have been developed to repair corrupted files of more than 30 different types, including files created with Microsoft Office software (such as Outlook and Excel) and data stored on various drives (hard disk drives, portable devices, CD/DVD, etc.). References 2003 software Utility software Computer memory Windows software System software Recovery Data recovery Data recovery software Data recovery companies
63033632
https://en.wikipedia.org/wiki/Protocol%20Wars
Protocol Wars
A long-running debate in computer science known as the Protocol Wars occurred from the 1970s to the 1990s when engineers, organizations and nations became polarized over the issue of which communication protocol would result in the best and most robust computer networks. This culminated in the Internet–OSI Standards War in the late 1980s and early 1990s, which was ultimately "won" by the Internet by the mid-1990s and has since resulted in most other protocols disappearing. The pioneers of packet switching technology built computer networks to research data communications in the early 1970s. As public data networks emerged in the mid to late 1970s, the debate about interface standards was described as a "battle for access standards". Several proprietary standards emerged and European postal, telegraph and telephone services (PTTs) developed the X.25 standard in 1976, which was adopted on public networks providing global coverage. The United States Department of Defense (DoD) developed and tested TCP/IP during the 1970s in collaboration with universities and researchers in the United States, United Kingdom and France. IPv4 was released in 1981 and the DoD made it standard for all military computer networking. By 1984, an international reference model known as the OSI model had been agreed on, with which TCP/IP was not compatible. Many governments in Europe – particularly France, West Germany, the United Kingdom and the European Economic Community – and also the United States Department of Commerce mandated compliance with the OSI model and the US Department of Defense planned to transition away from TCP/IP to OSI. Meanwhile, the development of a complete Internet protocol suite by 1989, and partnerships with the telecommunication and computer industry to incorporate TCP/IP software into various operating systems laid the foundation for the widespread adoption of TCP/IP as a comprehensive protocol suite. 
While OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking and as the core component of the emerging Internet. Early computer networking Pioneers vs PTTs Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users and, later, the possibility of achieving this over wide area networks. In the 1960s, Paul Baran in the United States and Donald Davies in the United Kingdom found it hard to convince incumbent telephone companies of the merits of their ideas for the design of computer data networks. AT&T in the United States and the postal, telegraph and telephone service (PTT) in the United Kingdom, the General Post Office (GPO), had a monopoly on communications infrastructure. They believed speech traffic would continue to dominate data traffic and believed in traditional telegraphic techniques. Baran published a series of briefings and papers about dividing information into "message blocks" and sending it over distributed networks between 1960 and 1964. Davies conceived of and named the concept of packet switching in data communication networks in 1965. He proposed a national commercial data network in the UK and built the local-area NPL network to demonstrate and research his ideas. Larry Roberts met Roger Scantlebury, a member of Donald Davies' team, at the 1967 Symposium on Operating Systems Principles. Roberts incorporated Davies' ideas about packet switching into the ARPANET design, a project established by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense (DoD) to enable resource sharing between computers. Roberts approached AT&T in the early 1970s about taking over the ARPANET to offer a public packet switched service but they declined. 
Louis Pouzin faced opposition from France's PTT, but his ideas to facilitate internetworking caught the attention of the ARPANET developers in the early 1970s. PTTs were operating on the basis of circuit switching, the alternatives to which are message switching or packet switching. Datagrams vs virtual circuits Packet switching can be based on either a connectionless or connection-oriented mode, which are completely different approaches to data communications. A connectionless datagram service transports packets independently of other packets whereas a connection-oriented virtual circuit transports packets between terminals in sequence. Both concepts have advantages and disadvantages. Virtual circuits emulate physical circuits, which are well understood in the telecoms industry, and mimic the operation of their equipment. Once set up, the data packets do not have to contain any routing information, which can simplify the packet structure and improve channel efficiency. The routers are also faster as the route setup is only done once; from then on, packets are simply forwarded down the existing link. One downside is that the equipment has to be more complex as the routing information has to be stored for the length of the connection. Another disadvantage is that the virtual connection may take some time to set up end-to-end, and for small messages, this time may be significant. Datagram services include the information needed for looking up the next link in the network in every packet. In these systems, the routers examine each packet as it arrives, look at the routing information within it, and decide where to route it. These have the advantage that there is no inherent overhead in setting up the circuit, meaning that a single packet can be transmitted as efficiently as a long stream. They also generally make routing around problems simpler as only the single routing table needs to be updated, not the routing information for every virtual circuit.
This also requires less memory, as only one route needs to be stored for any destination, not one per virtual circuit. On the downside, they need to examine every packet, which makes them (theoretically) slower. One of the first uses of the term 'protocol' in a data-communication context occurs in a memorandum entitled A Protocol for Use in the NPL Data Communications Network written by Roger Scantlebury and Keith Bartlett in April 1967. Building on Donald Davies’ simulation of datagram networks, Louis Pouzin built CYCLADES to research internetworking concepts. He first demonstrated the network, which used unreliable datagrams in the packet-switched network and virtual circuits for the transport layer, in 1973. Under the heading "Datagrams versus VC's", Larry Roberts wrote "As part of the continuing evolution of packet switching, controversial issues are sure to arise." NCP and TCP vs X.25 On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol which provided a reliable packet delivery procedure via an Interface Message Processor. The Network Control Program (NCP) for the ARPANET was first implemented in 1970. The designers of the NCP envisioned a hierarchy of protocols to enable Telnet and File Transfer Protocol (FTP) functions across the ARPANET. Networking research in the early 1970s by Bob Kahn and Vint Cerf led to the formulation of the first version of the Transmission Control Program (TCP) in 1974. Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December as a monolithic (single layer) design. The following year, testing began through concurrent implementations at Stanford, BBN and University College London, but it was not installed on the ARPANET at this time. 
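The trade-off described above, one route lookup at circuit setup versus a destination lookup for every packet, can be sketched in a toy Python model (illustrative only, not a real switch implementation):

```python
# Toy contrast between virtual-circuit and datagram forwarding.
# Illustrative only; real switches and routers are considerably more involved.

class VirtualCircuitSwitch:
    def __init__(self):
        self.circuits = {}          # circuit id -> outgoing link

    def setup(self, circuit_id, out_link):
        # The route is computed once at setup; subsequent packets carry
        # only the small circuit id, not full routing information.
        self.circuits[circuit_id] = out_link

    def forward(self, circuit_id):
        return self.circuits[circuit_id]

class DatagramRouter:
    def __init__(self, routing_table):
        self.table = routing_table  # destination -> outgoing link

    def forward(self, destination):
        # Every packet carries its destination and is looked up anew,
        # so no setup is needed and reroutes need only a table update.
        return self.table[destination]

vc = VirtualCircuitSwitch()
vc.setup(circuit_id=7, out_link="link-B")
dg = DatagramRouter({"host-X": "link-B"})
print(vc.forward(7), dg.forward("host-X"))  # both reach link-B
```

Note how the virtual-circuit switch must hold per-connection state for the lifetime of each circuit, while the datagram router holds only one entry per destination, mirroring the memory argument in the text.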
A protocol for internetworking was also being pursued by the International Networking Working Group, consisting of ARPANET researchers, members of the French CYCLADES project and the British team working on the NPL network and European Informatics Network. They agreed an end-to-end protocol that was presented to the CCITT in 1975 but was not adopted by the CCITT or by the ARPANET. The fourth biennial Data Communications Symposium that year included talks from Donald Davies, Louis Pouzin, Derek Barber, and Ira Cotten about the current state of packet-switched networking. The conference was covered by Computerworld magazine, which ran a story on the "battle for access standards" between datagrams and virtual circuits, as well as a piece describing how the "lack of standard access interfaces for emerging public packet-switched communication networks is creating 'some kind of monster' for users". At the conference, Louis Pouzin said pressure from European PTTs forced the Canadian DATAPAC network to change from a datagram to virtual circuit approach. After leaving ARPA in 1973, Larry Roberts joined the international effort to standardize a protocol for packet switching based on virtual circuits shortly before it was finalized. European PTTs, particularly the work of Rémi Després, contributed to the development of this standard, X.25, which was agreed by the CCITT in 1976. Roberts promoted this approach over the ARPANET model which he described as "oversold" in 1978. Vint Cerf said Roberts turned down his suggestion to use TCP when he built Telenet, saying that people would only buy virtual circuits and he could not sell datagrams. Common host protocol vs translating between protocols At the National Physical Laboratory in the United Kingdom, internetworking research considered the "basic dilemma" involved in connecting networks; that is, a common host protocol would require restructuring the existing networks.
The NPL network connected with the European Informatics Network (EIN) by translating between two different host protocols, that is, using a gateway. Concurrently, the NPL connection to the UK Experimental Packet Switched Service used a common host protocol in both networks. NPL research confirmed establishing a common host protocol would be more reliable and efficient. DoD model vs X.25 and proprietary standards The UK Coloured Book protocols gained some acceptance internationally as the first complete X.25 standard. First defined in 1975, they gave the UK "several years lead over other countries" but were intended as "interim standards" until international agreement was reached. The design of the Transmission Control Program incorporated both connection-oriented links and datagram services between hosts. In version 3 of TCP, written in 1978, the Transmission Control Program was split into two distinct protocols, the Internet Protocol (IP) as a connectionless layer and the Transmission Control Protocol (TCP) as a reliable connection-oriented service. Originally referred to as IP/TCP, version 4 was installed on SATNET in 1982 and on the ARPANET in January 1983 after the DoD made it standard for all military computer networking. This resulted in a networking model that became known informally as TCP/IP. It was also referred to as the Department of Defense (DoD) model, DARPA model, or ARPANET model. Computer manufacturers developed proprietary protocol suites such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's DECnet, and Xerox's Xerox Network Systems (XNS). During the late 1970s and most of the 1980s, there remained a lack of open networking options. Therefore, proprietary standards, particularly SNA and DECnet, as well as some variants of XNS (e.g., Novell NetWare and Banyan VINES), were commonly used on private networks, becoming somewhat "de facto" industry standards.
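The split described above, with IP as a connectionless layer beneath connection-oriented TCP, is what the everyday sockets interface still exposes; a minimal loopback echo in Python (standard library only) shows an application riding on the TCP/IP stack:

```python
# Minimal TCP echo over the loopback interface: TCP supplies the reliable,
# ordered byte stream on top of connectionless IP datagrams.
import socket
import threading

def run_echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # echo one message back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP over IPv4
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,)).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
server.close()
print(reply)
```

The application never handles packet loss, reordering, or retransmission; TCP's connection-oriented service hides the connectionless IP layer beneath it.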
The X.25 standard gained political support in European countries and from the European Economic Community (EEC). For example, the European Informatics Network, which was based on datagrams, was replaced with Euronet, which was based on X.25. Peter Kirstein wrote that European networks tended to be short-term projects with smaller numbers of computers and users. As a result, the European networking activities did not lead to any strong standards except X.25, which became the main European data protocol for fifteen to twenty years. Kirstein said his group at University College London was widely involved, partly because they were one of the most expert, and partly to try to ensure that the British activities, such as the JANET NRS, did not diverge too far from the US. The growth of public data networks based on the X.25 protocol suite through the 1980s, notably the International Packet Switched Service, created a global infrastructure for commercial data transport. In the US, the National Science Foundation (NSF), NASA, and the United States Department of Energy (DoE) all built networks variously based on the DoD model, DECnet, and IP over X.25. OSI reference model The Experimental Packet Switched System in the UK in the mid-late 1970s identified the need for defining higher-level protocols. The UK National Computing Centre publication 'Why Distributed Computing', which was based on extensive research into future potential configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977. Hubert Zimmermann, and Charles Bachman as chairman, played a key role in the development of the Open Systems Interconnections reference model. Beginning in 1978, this international work led to a draft proposal in 1980 and the final OSI model was published in 1984. The drafters of the reference model had to contend with many competing priorities and interests.
The rate of technological change made it necessary to define standards that new systems could converge to rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards. Although not a standard itself, it was a framework in which future standards could be defined. Internet protocol suite Until NSF took over in the 1980s, TCP/IP was not even a candidate for universal adoption. The implementation of the Domain Name System in 1985 and the development of a complete protocol suite by 1989 laid the foundation for the growth of TCP/IP as a comprehensive protocol suite, which became known as the Internet protocol suite. ARPANET was shut down in 1990 and responsibilities for governance shifted away from the DoD. Internet–OSI Standards War The early research and development of standards for data networks and protocols culminated in the Internet–OSI Standards War in the late 1980s and early 1990s. Engineers, organizations and nations became polarized over the issue of which standard would result in the best and most robust computer networks. Both standards are open and non-proprietary in addition to being incompatible, although "openness" may have worked against OSI while being successfully employed by Internet advocates. Philosophical and cultural aspects Historian Andrew Russell writes that Internet engineers such as Danny Cohen and Jon Postel were accustomed to continual experimentation in a fluid organizational setting through which they developed TCP/IP, and viewed OSI committees as overly bureaucratic and out of touch with existing networks and computers. This alienated the Internet community from the OSI model.
During a dispute within the Internet community, Vint Cerf performed a striptease in a three-piece suit at the 1992 Internet Engineering Task Force (IETF) meeting, revealing a T-shirt emblazoned with "IP on Everything"; according to Cerf, his intention was to reiterate that a goal of the Internet Architecture Board was to run IP on every underlying transmission medium. Cerf said the social culture (group dynamics) that first evolved during the work on the ARPANET was as important as the technical developments in enabling the governance of the Internet to adapt to the scale and challenges involved as it grew. François Flückiger writes that "firms that win the Internet market, like Cisco, are small. Simply, they possess the Internet culture, are interested in it and, notably, participate in IETF." Technical aspects Russell notes that Cohen, Postel and others were frustrated with technical aspects of OSI. The model defined seven layers of computer communications, from physical media in layer 1 to applications in layer 7, which was more layers than the network engineering community had anticipated. In 1987, Postel said that although they envisaged a hierarchy of protocols in the early 1970s, "If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required." Strict layering in OSI was viewed by Internet advocates as inefficient and did not allow trade-offs ("layer violation") to improve performance. The OSI model allowed what some saw as too many transport protocols (five compared with two for TCP/IP). Furthermore, OSI allowed for both the datagram and the virtual circuit approach at the network layer, which are non-interoperable options. Practical and commercial aspects Beginning in the early 1980s, ARPA pursued commercial partnerships with the telecommunication and computer industry which enabled the adoption of TCP/IP. In Europe, CERN purchased UNIX machines with TCP/IP for their intranet between 1984 and 1988. 
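The layer-count mismatch noted under the technical aspects is often summarized with a rough textbook correspondence between the seven OSI layers and the four layers of the DoD model; the mapping below is an approximation, not an official equivalence:

```python
# Rough correspondence between the seven OSI layers and the four layers
# of the DoD/ARPANET (TCP/IP) model. The mapping is approximate: OSI's
# top three layers collapse into TCP/IP's application layer, and its
# bottom two into the link layer.
osi_to_dod = {
    "application":  "application",
    "presentation": "application",
    "session":      "application",
    "transport":    "transport",
    "network":      "internet",
    "data link":    "link",
    "physical":     "link",
}
assert len(osi_to_dod) == 7                  # seven OSI layers...
assert len(set(osi_to_dod.values())) == 4    # ...collapse into four DoD layers
```

The extra presentation and session layers, never widely implemented as separate protocols, were among the aspects Internet engineers considered unnecessary.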
Nonetheless, Paul Bryant, the UK representative on the EARN Board of Directors, said "By the time JNT [the UK academic network JANET] came along [in 1984] we could demonstrate X25 ... and we firmly believed that BT [British Telecom] would provide us with the network infrastructure and we could do away with leased lines and experimental work. If we had gone with ARPA then we would not have expected to be able to use a public service. In retrospect the flaws in that argument are clear but not at the time. Although we were fairly proud of what we were doing, I don't think it was national pride or anti USA that drove us, it was a belief that we were doing the right thing. It was the latter that translated to religious dogma." JANET was a free X.25-based network for academic use, not research; experiments and other protocols were forbidden. The ARPA Internet was still a research project that did not allow commercial traffic or for-profit services. The NSFNET initiated operations in 1986 using TCP/IP but, two years later, the US Department of Commerce mandated compliance with the OSI standard and the Department of Defense planned to transition away from TCP/IP to OSI. Some European countries and the European Economic Community endorsed OSI. They founded RARE and associated national network operators (such as DFN, SURFnet, SWITCH) to promote OSI protocols, and restricted funding for non-OSI compliant protocols. However, in 1988, EUnet, the European UNIX Network, announced its conversion to Internet technology. By 1989, the OSI advocate Brian Carpenter made a speech at a technical conference entitled "Is OSI Too Late?" which received a standing ovation. OSI was formally defined, but vendor products from computer manufacturers and network services from PTTs were still to be developed. TCP/IP by comparison was not an official standard (it was defined in unofficial RFCs) but UNIX workstations with both Ethernet and TCP/IP included had been available since 1983.
At the beginning of the 1990s, academic institutions and organizations in some European countries had adopted TCP/IP. In February 1990 RARE stated "without putting into question its OSI policy, recognizes the TCP/IP family of protocols as an open multivendor suite, well adapted to scientific and technical applications." In the same month, CERN established a transatlantic TCP/IP link with Cornell University in the United States. Conversely, starting in August 1990, the NSFNET backbone supported the OSI Connectionless Network Protocol (CLNP) in addition to TCP/IP. CLNP was demonstrated in production on NSFNET in April 1991, and OSI demonstrations, including interconnections between U.S. and European sites, were planned at the InterOp '91 conference in October that year. At the Rutherford Appleton Laboratory (RAL) in the United Kingdom in January 1991, DECnet represented 75% of traffic, attributed to Ethernet between VAXs. IP was the second most popular set of protocols with 20% of traffic, attributed to UNIX machines for which "IP is the natural choice". In the Central Computing Department Newsletter, Paul Bryant, Head of Communications and Small Systems at RAL, wrote "Experience has shown that IP systems are very easy to mount and use, in contrast to such systems as SNA and to a lesser extent X.25 and Coloured Books where the systems are rather more complex." The author continued "The principal network within the USA for academic traffic is now based on IP. IP has recently become popular within Europe for inter-site traffic and there are moves to try and coordinate this activity. With the emergence of such a large combined USA/Europe network there are great attractions for UK users to have good access to it. This can be achieved by gatewaying Coloured Book protocols to IP or by allowing IP to penetrate the UK. Gateways are well known to be a cause of loss of quality and frustration. Allowing IP to penetrate may well upset the networking strategy of the UK." 
Similar views were shared by others at the time, including Louis Pouzin. At CERN, François Flückiger reflected "The technology is simple, efficient, is integrated into UNIX-type operating systems and costs nothing for the users’ computers. The first companies that commercialise routers, such as Cisco, seem healthy and supply good products. Above all, the technology used for local campus networks and research centres can also be used to interconnect remote centers in a simple way." Beginning in March 1991 the JANET IP Service (JIPS) was set up as a pilot project to host IP traffic on the existing network. Within eight months the IP traffic had exceeded the levels of X.25 traffic, and the IP support became official in November. Also in 1991, Dai Davies introduced Internet technology over X.25 into the pan-European NREN, EuropaNet, although he experienced personal opposition to this approach. The European Academic and Research Network (EARN) and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992. OSI usage on the NSFNET remained low when compared to TCP/IP. There was some talk of moving JANET to OSI protocols in the 1990s, but this never happened. The X.25 service was closed in August 1997. The invention of the World Wide Web in 1989 by Tim Berners-Lee at CERN, as an application on the Internet, brought many social and commercial uses to what was previously a network of networks for academic and research institutions. The Web began to enter everyday use in 1993–4. The U.S. National Institute for Standards and Technology proposed in 1994 that GOSIP should incorporate TCP/IP and drop the requirement for compliance with OSI, which was adopted into Federal Information Processing Standards the following year. NSFNET had altered its policies to allow commercial traffic in 1991, and was shut down in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic. 
Subsequently, the Internet backbone was provided by commercial Internet service providers and Internet connectivity became ubiquitous. Legacy As the Internet evolved and expanded exponentially, an enhanced protocol was developed, IPv6, to address IPv4 address exhaustion. In the 21st century, the Internet of things is leading to the connection of new types of devices to the Internet, bringing reality to Cerf's vision of "IP on Everything". Nonetheless, issues with IPv6 remain and alternatives have been proposed such as Recursive Network Architecture, and Recursive InterNetwork Architecture. The seven-layer OSI model is still used as a reference for teaching and documentation; however, the OSI protocols originally conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model doesn't fit today's networking protocols and have suggested instead a simplified approach. Other standards such as X.25 and SNA remain niche players. See also History of the Internet Notes References Sources Further reading External links Internet protocols History of the Internet Communications protocols Network protocols OSI model X.25
25433524
https://en.wikipedia.org/wiki/PrimoPDF
PrimoPDF
PrimoPDF is a freeware program that creates a PDF file from Microsoft Windows documents. It works as a virtual printer. It does not present the user with advertisements, but it does utilize the OpenCandy adware program and, per its terms of service, may use OpenCandy to recommend other software to the user. PrimoPDF is developed by the same company that develops the commercial Nitro PDF software. PrimoPDF requires the Microsoft .NET Framework 2.0. When the program runs, it tries to download automatic updates from www.primopdf.com each time it prints. This feature can be disabled within the program settings. It uses the Ghostscript file format converter and RedMon printer redirection software. According to its documentation, PrimoPDF has the following features: Consistent PDF creation. Use PrimoPDF's creation profiles to produce the same kind of PDF file every time. Profiles include Screen, eBook, Print, Prepress, and Custom. Append PDF files. Combine each newly created PDF file into one PDF. Secure PDF. Protect and encrypt your information with strong password-based PDF security. PDF metadata. Set the document properties information fields (author, title, subject, and keywords) to index your PDF files and make them easier to search. PDF versions. Create PDF files of different versions: 1.2, 1.3, 1.4, and 1.5. The software does not display ads within the program itself, so it cannot be classified as conventional adware, but it uses OpenCandy to supply software recommendations. OpenCandy includes spyware. See also List of PDF software List of virtual printer software References Seth Rosenblatt, CNET editors' review, October 8, 2009, CNET Davey Winder, PrimoPDF review, December 15, 2006, PC Pro External links Freeware PDF software
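The "PDF metadata" feature above refers to the PDF document information dictionary. As an illustration of where fields such as author and title live in the file format (not PrimoPDF's actual code), the following Python sketch hand-writes a minimal, simplified PDF:

```python
# Hand-rolled minimal PDF showing the Info dictionary, where metadata
# fields such as /Author, /Title, /Subject and /Keywords are stored.
# Simplified illustration of the file format, not production code.

def tiny_pdf(author, title):
    objects = [
        b"1 0 obj << /Type /Catalog /Pages 2 0 R >> endobj\n",
        b"2 0 obj << /Type /Pages /Kids [3 0 R] /Count 1 >> endobj\n",
        b"3 0 obj << /Type /Page /Parent 2 0 R /MediaBox [0 0 72 72] >> endobj\n",
        ("4 0 obj << /Author (%s) /Title (%s) >> endobj\n"
         % (author, title)).encode("ascii"),
    ]
    out = b"%PDF-1.4\n"
    offsets = []
    for obj in objects:            # record each object's byte offset
        offsets.append(len(out))
        out += obj
    xref_pos = len(out)            # cross-reference table lists the offsets
    out += b"xref\n0 5\n0000000000 65535 f \n"
    for off in offsets:
        out += b"%010d 00000 n \n" % off
    # The trailer's /Info key points at the metadata object (4 0 obj).
    out += (b"trailer << /Size 5 /Root 1 0 R /Info 4 0 R >>\n"
            b"startxref\n%d\n%%%%EOF\n" % xref_pos)
    return out

data = tiny_pdf("Jane Doe", "Example")
print(b"/Author" in data, data.startswith(b"%PDF"))  # True True
```

Tools like PrimoPDF write these same Info fields on behalf of the user; search and indexing software then reads them back from the dictionary referenced by the trailer.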
1386692
https://en.wikipedia.org/wiki/IWork
IWork
iWork is an office suite of applications created by Apple Inc. for its macOS and iOS operating systems, and also available cross-platform through the iCloud website. It includes the presentation application Keynote, the word processing and desktop publishing application Pages, and the spreadsheet application Numbers. Apple's design goals in creating iWork have been to allow Mac users to easily create attractive documents and spreadsheets, making use of macOS's extensive font library, integrated spelling checker, sophisticated graphics APIs and its AppleScript automation framework. The equivalent Microsoft Office applications to Pages, Numbers, and Keynote are Word, Excel, and PowerPoint, respectively. Although Microsoft Office applications cannot open iWork documents, iWork applications can export documents from their native formats (.pages, .numbers, .key) to Microsoft Office formats (.docx, .xlsx, .pptx, etc.) as well as to PDF files. The oldest application in iWork is Keynote, first released as a standalone application in 2003 for use by Steve Jobs in his presentations. Steve Jobs announced Keynote saying "It's for when your presentation really matters". Pages was released with the first iWork bundle in 2005; Numbers was added in 2007 with the release of iWork '08. The next release, iWork '09, also included access to iWork.com, a beta service that allowed users to upload and share documents, now integrated into Apple's iCloud service. A version of iWork for iOS was released in 2010 with the first iPad, and the apps have been regularly updated since, including the addition of iPhone support. In 2013, Apple launched iWork web apps in iCloud; even years later, however, their functionality is somewhat limited compared to equivalents on the desktop. iWork was initially sold as a suite for $79, then later at $19.99 per app on OS X and $9.99 per app on iOS. 
Apple announced in October 2013 that all iOS and OS X devices purchased from then on, whether new or refurbished, would be eligible for a free download of all three iWork apps. iWork for iCloud, which also incorporates a document hosting service, is free to all holders of an iCloud account. iWork was released as freeware for macOS and iOS in April 2017. In September 2016, Apple announced that the real-time collaboration feature would be available for all iWork apps. History The first version of iWork, iWork '05, was announced on January 11, 2005 at the Macworld Conference & Expo and made available on January 22 in the United States and worldwide on January 29. iWork '05 comprised two applications: Keynote 2, a presentation creation program, and Pages, a word processor. iWork '05 was sold for US$79. A 30-day trial was also made available for download on Apple's website. Originally, IGG Software held the rights to the name iWork. While iWork was billed by Apple as "a successor to AppleWorks", it does not replicate AppleWorks's database and drawing tools. However, iWork integrates with existing applications from Apple's iLife suite through the Media Browser, which allows users to drag and drop music from iTunes, movies from iMovie, and photos from iPhoto and Aperture directly into iWork documents. iWork '06 was released on January 10, 2006 and contained updated versions of both Keynote and Pages. Both programs were released as universal binaries for the first time, allowing them to run natively on both PowerPC processors and the Intel processors used in the new iMac desktop computers and MacBook Pro notebooks which had been announced on the same day as the new iWork suite. The next version of the suite, iWork '08, was announced and released on August 7, 2007 at a special media event at Apple's campus in Cupertino, California. iWork '08, like previous updates, contained updated versions of Keynote and Pages. A new spreadsheet application, Numbers, was also introduced. 
Numbers differed from other spreadsheet applications, including Microsoft Excel, in that it allowed users to create documents containing multiple spreadsheets on a flexible canvas using a number of built-in templates. iWork '09 was announced on January 6, 2009 and released the same day. It contains updated versions of all three applications in the suite. iWork '09 also included access to a beta version of the iWork.com service, which allowed users to share documents online until that service was decommissioned at the end of July 2012. Users of iWork '09 could upload a document directly from Pages, Keynote, or Numbers and invite others to view it online. Viewers could write notes and comments in the document, and download a copy in iWork, Microsoft Office, or PDF formats. iWork '09 was also released with the Mac App Store on January 6, 2011 at $19.99 per application, and received regular updates after this point, including links to iCloud and a high-DPI version designed to match Apple's MacBook Pro with Retina Display. On January 27, 2010, Apple announced iWork for iPad, to be available as three separate $9.99 applications from the App Store. This version has also received regular updates, including a version for the iPhone and iPod touch, and an update to take advantage of Retina Display devices and the larger screens of recent iPhones. On October 22, 2013, Apple announced an overhaul of the iWork software for both the Mac and iOS. Both suites were made available via the respective App Stores. The update is free for current iWork owners and was also made available free of charge for anyone purchasing an OS X or iOS device after October 1, 2013. Any user activating the newly free iWork apps on a qualifying device can download the same apps on another iOS or OS X device logged into the same App Store account. 
The new OS X versions have been criticized for losing features such as multiple selection, linked text boxes, bookmarks, 2-up page views, mail merge, searchable comments, the ability to read/export RTF files, default zoom and page count, and integration with AppleScript. Apple has provided a road-map for feature re-introduction, stating that it hopes to reintroduce some missing features within the next six months. As of April 1, 2014, a few features (e.g., the ability to set the default zoom) had been reintroduced, though scores of others had not. In October 2014, writer John Gruber commented on the numerous font handling problems that "it's like we're back in 1990 again." Because they use a completely new file format that works across macOS, Windows, and most web browsers via the online iCloud web apps, versions of iWork beginning with iWork '13 do not open or allow editing of documents created in versions prior to iWork '09; users who attempt to open older iWork files are given a pop-up in the new iWork '13 apps telling them to use the previous iWork '09 (which users may or may not have on their machine) to open and edit such files. Accordingly, the current version for OS X (which was initially only compatible with OS X Mavericks 10.9 onwards) moves any previously installed iWork '09 apps to an iWork '09 folder on the user's machine (in /Applications/iWork '09/), as a work-around to allow users continued use of the earlier suite to open and edit older iWork documents locally on their machine. In October 2015, Apple released an update to mitigate this issue, allowing users to open documents saved in iWork '06 and iWork '08 formats in the latest version of Pages. In 2016, Apple announced that the real-time collaboration feature would be available for all iWork apps, instead of being constrained to iWork for iCloud. The feature is comparable to Google Docs. 
Versions Major releases Updates iWork '09 received several updates: iWork 9.0.3 DVD (for Mac OS X 10.5.6 "Leopard" or newer; released August 26, 2010) iWork 9.0.4 (for Mac OS X 10.5.6 "Leopard" or newer; released August 26, 2010) iWork 9.1 (for Mac OS X 10.6.6 "Snow Leopard" or newer; released July 20, 2011) iWork 9.3 (for Mac OS X 10.7.4 "Lion" or newer; released December 4, 2012) The Mac App Store version of iWork was updated on October 15, 2015 for 10.10 "Yosemite" or newer. It is the final release to support 10.10 "Yosemite" and 10.11 "El Capitan". Keynote 6.6, Pages 5.6 and Numbers 3.6 are included. iWork received a major update again on March 28, 2019 with Keynote 9.0, Pages 8.0 and Numbers 6.0. Components Common components Products in the iWork suite share a number of components, largely as a result of sharing underlying code from Cocoa and similar shared application programming interfaces (APIs). Among these are the well-known universal multilingual spell checker, which can also be found in products like Safari and Mail. Grammar checking, find and replace, and style and color pickers are similar examples of design features found throughout the Apple application space. Moreover, the applications in the iWork suite also share a new model of the document. In most document-based applications there is a particular data type which forms the basis of the application's view of the world: in word processors the text is the first-class citizen of the application, while in a spreadsheet it is the cells in the table. Other objects, images or charts for instance, are managed by being attached to, or referenced by, the underlying primary data type. In iWork, all of the applications share a common underlying document format, the "canvas", a generic container type that provides layout and storage mechanisms. Each application then adds its own custom objects and places them on the canvas. 
Pages, for instance, conventionally opens with a single large text object on the canvas. To the user it appears to be a typical word processor, but they can grab the corner and re-size it as in a page layout application. In Numbers, one initially sees a grid of cells like any other spreadsheet, but the user is free to size it smaller than the canvas, and then add multiple grids, charts or even drawings to the same canvas. The difference is subtle, as many of these features are also implemented in more traditional programs like Microsoft Excel. However, the difference in UI can be significant. In Excel, for instance, charts are stored as part of a sheet, and can be moved inadvertently through natural user actions. In Numbers, charts are, like everything else, part of the canvas, and changes to the sheet(s) are normally independent. The iWork model bears some resemblance to the earlier Apple effort, OpenDoc. OpenDoc also used a single underlying document engine, along with a single on-disk format. Unlike iWork, however, OpenDoc also used a single application, in which various editors could be invoked. For instance, one could open a generic document, start a spreadsheet editor, then add a spreadsheet. iWork lacks this level of flexibility in editing terms, but maintains it in layout. Desktop applications Pages Pages is a word processing application. Besides basic word processing functionality, Pages includes templates designed by Apple to allow users to create various types of documents, including newsletters, invitations, stationery, and résumés, along with a number of education-themed templates for students and teachers, such as reports and outlines. Pages 5, a complete redesign, removed more than 100 of the features of Pages 4.x, including bookmarks, mail merge, linked text boxes, multiple-section capability, and the ability to set the default zoom. 
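The shared canvas model described above can be sketched as a generic container of positioned objects, with each application choosing which object kinds it places on the canvas. The class names below are purely illustrative and are not Apple's actual implementation.

```python
# Minimal sketch of the shared "canvas" document model: a generic
# container holding application-specific objects with positions/sizes.
from dataclasses import dataclass, field

@dataclass
class CanvasObject:
    kind: str      # e.g. "text", "table", "chart"
    x: int
    y: int
    width: int
    height: int

@dataclass
class Canvas:
    objects: list = field(default_factory=list)

    def add(self, obj: CanvasObject) -> None:
        self.objects.append(obj)

# A Pages-like document starts as one large text object on the canvas...
pages_doc = Canvas()
pages_doc.add(CanvasObject("text", 0, 0, 612, 792))

# ...while a Numbers-like document freely mixes several grids and a
# chart on the same canvas.
numbers_doc = Canvas()
numbers_doc.add(CanvasObject("table", 20, 20, 300, 200))
numbers_doc.add(CanvasObject("table", 20, 240, 300, 120))
numbers_doc.add(CanvasObject("chart", 340, 20, 250, 200))

print(len(numbers_doc.objects))  # → 3
```

The point of the sketch is that, unlike Excel's chart-inside-a-sheet model, every object here is a peer on the canvas and can be moved or resized independently.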
Apple has been slowly adding some back in subsequent 5.x releases (the default zoom, for example, can now be set, but linked text boxes, multiple selection, mail merge, bookmarks, and more than 90 other features that were present in version 4.3 remain missing). Along with Keynote and Numbers, Pages integrates with Apple's iLife suite. Using the Media Browser, users can drag and drop movies, photos and music directly into documents within the Pages application. A Full Screen view hides the menu bar and toolbars, and an outline mode allows users to quickly create outlines which can easily be rearranged by dragging and dropping, as well as collapsed and expanded. Pages includes support for entering complex equations with MathType 6 and for reference citing using EndNote X2. The Pages application can open and edit Microsoft Word documents (including DOC and Office Open XML files), and plain text documents. Pages 5 can no longer read or export rich text format documents. Pages can also export documents in the DOC, PDF, and ePub formats. It cannot read or write OpenDocument file formats. As a word-processing application targeted towards creating attractive documents for a range of applications such as lesson plans and newsletters, Pages competes with Microsoft Word, Microsoft Publisher (never ported to OS X), Apple's own free e-book and PDF authoring application, iBooks Author, and Adobe's professional-market desktop publishing application InDesign. Keynote Keynote is an application used to create and play presentations. Its features are comparable to those of Microsoft PowerPoint, though Keynote contains several unique features. Keynote, like Pages and Numbers, integrates with the iLife application suite. Users can drag and drop media from iMovie, iTunes, iPhoto and Aperture directly into Keynote presentations using the Media Browser. Keynote contains a number of templates, transitions, and effects. 
Magic Move allows users to apply simple transitions to automatically animate images and text that are repeated on consecutive slides. Apple formerly released a Keynote Remote application for iOS that let users view slides and presenter notes and control Keynote presentations with an iPhone or iPod Touch over a Wi-Fi network, but that functionality has been rolled into subsequent releases of the main Keynote iOS application. Keynote supports a number of file formats. By default, presentations are saved as .key files. Keynote can open and edit Microsoft PowerPoint (.ppt) files. In addition, presentations can be exported as Microsoft PowerPoint files, QuickTime movies (which are also playable on iPod and iPhone), HTML files, and PDF files. Presentations can also be sent directly to iDVD, iTunes, GarageBand, iWeb, and to YouTube. The Keynote '09 file format is not backward compatible; .key files saved with Keynote '09 cannot be opened with earlier versions of Keynote. Numbers Numbers is a spreadsheet application that was added to the iWork suite in 2007 with the release of iWork '08. Numbers, like Microsoft Excel and other spreadsheet applications, lets users organize data into tables, perform calculations with formulas, and create charts and graphs using data entered into the spreadsheet. Numbers, however, differs from other spreadsheet applications in that it allows users to create multiple tables in a single document on a flexible canvas. Many prebuilt templates, including ones designed for personal finance, education, and business use, are included. Numbers 2.0 was included with iWork '09, with several improvements. Charts that are pasted into Keynote and Pages are automatically updated across documents when they are changed in Numbers. Additionally, Numbers 2 lets users categorize data in tables by column, which can then be collapsed and summarized. Numbers 3.6 added the ability to open Numbers '08 spreadsheets, among other things. 
Web services iWork.com iWork.com was a free service that enabled users to share iWork '09 documents online directly from within Pages, Keynote and Numbers. Users could click the iWork.com toolbar icon and log in using their Apple ID to upload a document and invite others to view it online. Viewers could leave comments and notes on the document and download a copy in iWork, Microsoft Office, or PDF formats. Document owners could track comments at the iWork.com website. It was released as a public beta on January 6, 2009 at the Macworld Conference & Expo. The iWork.com service provided a web interface for viewing, downloading, and commenting on uploaded documents. In contrast to cloud-based office applications such as Google Docs and Office Online, it did not offer editing. iWork.com supported uploading of Pages '09 documents, Keynote '09 presentations, and Numbers '09 spreadsheets. Users could download documents in both Microsoft Office and PDF formats, in addition to their native iWork formats. Uploading documents to iWork.com required a copy of the iWork '09 software suite and an Apple ID. Viewing, commenting, and downloading required only a web browser and an invitation to view the document. Apple announced that after July 31, 2012, users would no longer be able to publish new documents to iWork.com from any iWork application, and that documents stored on iWork.com would not be available to download or view after the shutdown date. Instead, users could use iCloud to share documents between their computers (running OS X Mountain Lion) and their iOS devices. Users attempting to access the iWork.com site are redirected to the Apple homepage. iWork for iCloud During the 2013 Apple Worldwide Developers Conference (WWDC) keynote speech, iWork for iCloud was announced for release at the same time as the next versions of the iWork apps later in the year. 
The three apps for both iOS and OS X that form Apple's iWork suite (Pages, Numbers, and Keynote) were made available on a web interface (named Pages for iCloud, Numbers for iCloud, and Keynote for iCloud respectively), accessed via the iCloud website under each user's iCloud Apple ID login. They also sync with the user's iOS and OS X versions of the apps, should they have them, via their iCloud Apple ID. Later in 2013, an iWork for iCloud update added support for real-time collaboration, such that the same document could be opened by collaborators at the same time and everyone could make changes simultaneously. It took a few seconds for changes to propagate to other collaborators. This, however, did not work together with the iOS and OS X apps, which would show "out-of-sync" dialogs when editing together with collaborators using iWork for iCloud. In 2016, Apple announced that the real-time collaboration feature would also be available in the iOS and OS X apps. iWork for iCloud allows the user to edit and create documents on the web, using one of the supported browsers; currently Safari, Chrome, and Internet Explorer. It also means that Microsoft Windows users now have access to these document editing tools, previously available only on Apple devices, via the web interface. iWork for iCloud has a more limited set of features compared to the OS X version of the applications. For instance, the fonts available are more limited, and the web version does not fully support printing and may display documents created with the support of external plug-ins incorrectly. In 2014, an iWork for iCloud update added eight languages, 50 new fonts, and improved editing in the cloud-based versions of Pages, Numbers and Keynote. 
iOS apps On June 7, 2010, while showcasing the new iPhone 4, Apple posted a few screenshots of the device in action and inadvertently showed the possibility of opening an email attachment inside of Keynote, leading some to believe that an iPhone version of the iWork suite would soon be available in the iOS App Store. On June 28, 2010, several websites reported that in an attempt to sell AppleCare for the iPhone 4, several examples of services offered were given, including one that read, "Using iWork for iPhone and other Apple-branded iPhone apps." These sites also reported that it was quickly removed. On May 31, 2011, Apple released a press statement that iWork would be available on the iOS App Store for the iPhone and iPod touch. On September 10, 2013, Apple announced that iWork, iMovie and iPhoto would be available to download for free on new iOS devices activated after September 1. See also List of office suites Comparison of office suites Office Open XML software iCloud References Apple Inc. software iWork.com MacOS-only software made by Apple Inc. 2005 software
39035285
https://en.wikipedia.org/wiki/Kirby%20High%20School%20%28Arkansas%29
Kirby High School (Arkansas)
Kirby High School is an accredited public high school located in the rural community of Kirby, Arkansas, United States. The school provides comprehensive secondary education for more than 200 students each year in grades 7 through 12. It is one of three public high schools in Pike County, Arkansas and the only high school administered by the Kirby School District. Academics Kirby High School is accredited by the Arkansas Department of Education (ADE). The assumed course of study follows the Smart Core curriculum developed by the ADE. Students complete regular (core and elective) and career focus coursework and exams and may take Advanced Placement (AP) courses and exams with the opportunity to receive college credit. Kirby is a member of the Dawson Education Service Cooperative, which provides career and technical education programs for the area's high school students in multiple school districts. Kirby High School is listed unranked for its academic programs in the Best High Schools 2012 report by U.S. News & World Report. Athletics The Kirby High School mascot and athletic emblem are the Trojans with maroon and gray serving as the school colors. The Kirby Trojans compete in interscholastic activities within the 1A Classification from the 1A-7 West Conference as administered by the Arkansas Activities Association. The Trojans participate in golf (boys/girls), basketball (boys/girls), baseball, and softball. References External links Public high schools in Arkansas Schools in Pike County, Arkansas
18097115
https://en.wikipedia.org/wiki/University%20of%20Central%20Florida%20College%20of%20Engineering%20and%20Computer%20Science
University of Central Florida College of Engineering and Computer Science
The University of Central Florida College of Engineering and Computer Science is an academic college of the University of Central Florida located in Orlando, Florida, United States. The college offers degrees in engineering, computer science and management systems, and houses UCF's Department of Electrical Engineering and Computer Science. The dean of the college is Michael Georgiopoulos, Ph.D. UCF is listed as a university with "very high research activity" by The Carnegie Foundation for the Advancement of Teaching. With an enrollment of over 7,500 undergraduate and graduate students as of Fall 2012, the college is one of the largest engineering schools in the United States. The college is recognized by U.S. News & World Report as one of the nation's best engineering schools, and as one of the world's best in the ARWU rankings. The university has made noted research contributions to modeling and simulation, digital media, and engineering and computer science. History The College of Engineering and Computer Science was one of the four original academic colleges when UCF began classes in 1968 as Florida Technological University. The State University System of Florida's Board of Regents approved the creation of a college of engineering on September 16, 1966. The college was launched as the university's College of Engineering and Technologies on March 28, 1969. A third Engineering Building, designed in 2000-2002 for the School of EECS with a $15 million allocation from the State of Florida, was later completed. In 2005, Harris Corporation donated $3 million to the College of Engineering & Computer Science, and the building was named the Harris Corporation Engineering Center in recognition of the gift. 
Academics Housing some of the university's showcase majors, the College of Engineering and Computer Science is made up of the following departments: Civil, Environmental, and Construction Engineering (CECE) Computer Science (CS) Electrical and Computer Engineering (ECE) Industrial Engineering & Management Systems (IEMS) Mechanical, Materials, & Aerospace Engineering (MMAE) The college has 13 undergraduate programs, 14 master's degree programs, and eight doctoral degree programs. UCF has been classified as a research university (very high research activity) by the Carnegie Foundation for the Advancement of Teaching. The Graduate School of the College of Engineering & Computer Science is ranked #70 in the Top 100 engineering schools by the U.S. News & World Report. It also features in the Top 100 Engineering/Technology and Computer Sciences schools in the world in the ARWU by Shanghai Jiao Tong University. The college consists of the Department of Electrical Engineering & Computer Science (EECS), the Civil, Environmental, and Construction Engineering (CECE) Department, the Industrial Engineering and Management Systems (IEMS) Department, and the Mechanical, Materials and Aerospace Engineering (MMAE) Department. The ROTC Division consists of the Aerospace Studies Department (Air Force ROTC) and the Military Science Department (Army ROTC). Electrical Engineering and Computer Science The Department of Electrical Engineering and Computer Science was founded in 1999 as a result of the merger of the School of Computer Science with the Department of Electrical and Computer Engineering. In 2005, the Computer Science and ECE programs were fully merged, administratively and curriculum-wise, as one unified School of EECS. In the summer of 2010, the School of EECS was renamed the Department of EECS. The Electrical Engineering and Computer Science Department has had many major accomplishments in its history. 
The Computer Science Programming Team participates in the Association for Computing Machinery's International Collegiate Programming Contest (ACM-ICPC) and placed 1st in the fall 2016 and 2017 Southeast ACM Regional Programming Contests. Since 1982, the college has placed in the top 3 of the five-state region. The team finished 13th in the spring 2017 World Finals (top U.S. team and 2nd in North America). The team improved their ranking in the Spring 2018 World Finals, held in Beijing, China, placing 10th overall out of 140 teams and earning a Bronze Medal and the North America Champion title. The Programming Team has qualified for and attended 29 Finals since 1983, placing as high as 2nd in the competition. The Computer Science department is also home to the UCF Collegiate Cyber Defense Competition Team. Although the National Collegiate Cyber Defense Competition was established in 2005, it was not until January 2013 that UCF entered a team in this competition. In their inaugural season, the UCF CCDC Team finished in 1st place in the Southeastern Collegiate Cyber Defense Competition and placed 10th at the National Collegiate Cyber Defense Competition. The UCF CCDC Team came back stronger in 2014 and once again won the Southeast Collegiate Cyber Defense Competition, then placed 1st at the National Collegiate Cyber Defense Competition to become the reigning National Champions of Cyber Defense. UCF maintained its winning tradition in 2015, finishing in 1st place at the Southeast Collegiate Cyber Defense Competition and claiming the National Championship at the National Collegiate Cyber Defense Competition for the second consecutive year. Rankings The Electrical Engineering graduate program is ranked 57th nationally in the 2010 U.S. News & World Report America's Best Graduate Schools. Computer Science was ranked in the top 100 departments worldwide in 2010 by the Academic Ranking of World Universities. 
The CS Doctoral Program was ranked in the top 20 programs by NAGPS in 2001. Research Metropolitan Orlando sustains the world's largest recognized cluster of modeling, simulation and training companies, concentrated directly south of the main campus in the Central Florida Research Park, one of the largest research parks in the nation. Providing more than 10,000 jobs, the Research Park is the largest research park in Florida, the fourth largest in the United States by number of companies, and the seventh largest in the United States by number of employees. Collectively, UCF's research centers and the park manage over $5.5 billion in contracts annually. The university fosters partnerships with corporations such as Lockheed Martin, Boeing, and Siemens, and with local community colleges. UCF also houses a satellite campus at the Kennedy Space Center in Cape Canaveral, Florida. UCF is also a member of the Florida High Tech Corridor Council. References External links UCF College of Engineering and Computer Science UCF Department of Electrical Engineering and Computer Science UCF Industrial Engineering and Management Systems UCF Materials, Mechanical, and Aerospace Engineering UCF Civil, Environmental, and Construction Engineering University of Central Florida Official Website Engineering And Computer Science Educational institutions established in 1968 1968 establishments in Florida Engineering schools and colleges in the United States Engineering universities and colleges in Florida Computer science departments in the United States Electrical and computer engineering departments
4637118
https://en.wikipedia.org/wiki/Boot%20Camp%20%28software%29
Boot Camp (software)
Boot Camp Assistant is a multi-boot utility included with Apple Inc.'s macOS (previously OS X) that assists users in installing Microsoft Windows operating systems on Intel-based Macintosh computers. The utility guides users through non-destructive disk partitioning (including resizing of an existing HFS+ or APFS partition, if necessary) of their hard disk drive or solid-state drive and installation of Windows device drivers for the Apple hardware. The utility also installs a Windows Control Panel applet for selecting the default boot operating system. Initially introduced as an unsupported beta for Mac OS X 10.4 Tiger, the utility first shipped as a supported feature with Mac OS X 10.5 Leopard and has been included in subsequent versions of the operating system ever since. Previous versions of Boot Camp supported Windows XP and Windows Vista. Boot Camp 4.0, for Mac OS X 10.6 Snow Leopard version 10.6.6 up to Mac OS X 10.8 Mountain Lion version 10.8.2, only supported Windows 7. However, with the release of Boot Camp 5.0 for Mac OS X 10.8 Mountain Lion in version 10.8.3, only 64-bit versions of Windows 7 and Windows 8 are officially supported. Boot Camp 6.0 added support for 64-bit versions of Windows 10. Boot Camp 6.1, available on macOS 10.12 Sierra and later, will only accept new installations of Windows 7 and later; this requirement was upgraded to requiring Windows 10 for macOS 10.14 Mojave. Boot Camp is currently not available on Apple silicon Macs; however, Craig Federighi has stated that there is technically nothing stopping ARM-based versions of Windows 10 and Windows 11 from running on Apple silicon processors. Microsoft would just need to change the licensing policies regarding ARM-based Windows 10 and Windows 11: currently only OEMs who pre-install Windows 10 and Windows 11 on their products may purchase licenses for it, and it is not publicly available to consumers like other versions of Windows 10 and Windows 11. 
It is already possible to run ARM-based Windows 10 (only Windows Insider builds, as they are the only widely available ARM builds of Windows 10) through the QEMU emulator and the Parallels Desktop virtualization software (which also supports Windows 11 and Linux), supporting Federighi's statement. It is currently rumored that Microsoft's exclusivity deal with Qualcomm will expire sometime in early 2022, which would allow Apple and other manufacturers to provide support for Windows on their ARM-based machines if it were not to be renewed. Overview Installation Setting up Windows 10 on a Mac requires an ISO image of Windows 10 provided by Microsoft. Boot Camp combines Windows 10 with install scripts to load hardware drivers for the targeted Mac computer. Boot Camp currently supports Windows 10 on a range of Macs dated mid-2012 or newer. Startup Disk By default, the Mac will always boot from the last-used startup disk. Holding down the option key (⌥) at startup brings up the boot manager, which allows the user to choose which operating system to start the device in. When using a non-Apple keyboard, the alt key usually performs the same action. The boot manager can also be launched by holding down the "menu" button on the Apple Remote at startup. On older Macs, its functionality relies on BIOS emulation through EFI and a partition table information synchronization mechanism between GPT and MBR combined. On newer Macs, Boot Camp keeps the hard disk as a GPT so that Windows is installed and booted in UEFI mode. 
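The GPT/MBR distinction above can be made concrete by looking at the first disk sector. A pure GPT disk carries a "protective" MBR with a single partition entry of type 0xEE, which is what a UEFI-booting Boot Camp setup leaves in place; the older synchronization scheme produced a "hybrid" MBR mixing the 0xEE entry with BIOS-bootable entries. The sketch below classifies an in-memory 512-byte sector; it is an illustration of the on-disk layout, not Boot Camp's actual code.

```python
# Sketch: classify a 512-byte boot sector as protective, hybrid, or
# conventional MBR. The MBR partition table starts at offset 446, each
# entry is 16 bytes, and the partition-type byte sits at entry offset 4.
def classify_mbr(sector: bytes) -> str:
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        return "not an MBR"
    types = [sector[446 + 16 * i + 4] for i in range(4)]
    types = [t for t in types if t != 0]       # ignore empty entries
    if types == [0xEE]:
        return "protective MBR (pure GPT, UEFI boot)"
    if 0xEE in types:
        return "hybrid MBR (GPT + BIOS-bootable entries)"
    return "conventional MBR"

# Build a protective MBR in memory for illustration.
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"   # boot signature
sector[446 + 4] = 0xEE          # single GPT-protective partition entry
print(classify_mbr(bytes(sector)))  # → protective MBR (pure GPT, UEFI boot)
```

On a real Mac the same check would be run against the first 512 bytes of the boot disk, which requires elevated privileges.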
Requirements Mac OS X 10.7 Lion and OS X 10.8 Mountain Lion Apple's Boot Camp system requirements lists the following requirements for Mac OS X Lion and OS X Mountain Lion: 8 GB USB storage device, or external drive formatted as MS-DOS (FAT) for installation of Windows drivers for Mac hardware 20 GB free hard disk space for a first-time installation or 40 GB for an upgrade from a previous version of Windows A full version of one of the following operating systems: Windows 7 Home Premium, Professional, or Ultimate (64-bit editions only) Windows 8 and Windows 8 Professional (64-bit editions only) Windows 10 Home, Pro, Pro for Workstation, Education or Enterprise (64-bit editions only) Mac OS X 10.5 Leopard and Mac OS X 10.6 Snow Leopard Apple lists the following requirements for Mac OS X 10.5 Leopard and Mac OS X 10.6 Snow Leopard: An Intel-based Macintosh computer with the latest firmware (Early Intel-based Macintosh computers require an EFI firmware update for BIOS compatibility). A Mac OS X 10.5 Leopard or Mac OS X 10.6 Snow Leopard installation disc or Mac OS X Disc 1 included with Macs that have Mac OS X 10.5 Leopard or Mac OS X 10.6 Snow Leopard preinstalled; this disc is needed for installation of Windows drivers for Mac hardware 10 GB free hard disk space (16 GB is recommended for Windows 7) A full version of one of the following operating systems: Windows XP Home Edition or Windows XP Professional Edition with Service Pack 2 or higher (32-bit editions only) Windows Vista Home Basic, Home Premium, Business, Enterprise or Ultimate (32-bit and 64-bit editions) Windows 7 Home Premium, Professional, Enterprise or Ultimate (32-bit and 64-bit editions) Supported Macintosh computers with Windows 8 Officially, the earliest Macintosh models that support Windows 8 are the mid-2011 MacBook Air, 13-inch-mid-2011 or 15 and 17-inch-mid-2010 MacBook Pro, mid-2011 Mac Mini, 21-inch-mid-2011 or 27-inch-mid-2010 iMac, and early 2009 Mac Pro. 
By running the Boot Camp Assistant with a compatible Windows setup disc in the drive, and switching to a Windows 8 disc when Mac OS X reboots the machine to begin installing Windows, Windows 8 can be installed on older, unsupported hardware. This also works with Windows 10.

Limitations
Boot Camp will only help the user partition the disk if it currently contains only a primary HFS partition, an EFI System Partition, and a Mac OS X Recovery Partition. Thus, for example, it is not possible to maintain an additional storage partition. A workaround has been discovered that involves interrupting the standard procedure after creating the Boot Camp partition, resizing the primary Mac OS X partition and creating a third partition in the now-available space, then continuing with the Windows installation. Changes to the partition table after Windows is installed are officially unsupported, but can be achieved with the help of third-party software. Boot Camp does not help users install Linux, and does not provide drivers for it. Most methods for dual-booting with Linux on a Mac rely on manual disk partitioning and the use of an EFI boot manager such as rEFInd. Despite Macs transitioning to Thunderbolt 3 in 2016, Boot Camp does not currently support running Windows with a Thunderbolt 3-powered external GPU (eGPU) unit under macOS High Sierra, macOS Mojave or macOS Catalina. Apple has not publicly commented on why this limitation is in place.

Boot Camp version history

Boot Camp support software (for Windows) version history

See also
Parallels Desktop for Mac
rEFIt and rEFInd
VMware Fusion
VirtualBox

References

External links
Boot Camp support page and installation instructions
Using the Apple Bluetooth Wireless Keyboard in Boot Camp
Troubleshooting Internet Connectivity Issues on Boot Camp with Windows 8

2006 software
Apple Inc. file systems
Apple Inc. software
Boot loaders
Background Intelligent Transfer Service
Background Intelligent Transfer Service (BITS) is a component of Microsoft Windows XP and later versions of the operating system which facilitates asynchronous, prioritized, and throttled transfer of files between machines using idle network bandwidth. It is most commonly used by recent versions of Windows Update, Microsoft Update, Windows Server Update Services, and System Center Configuration Manager to deliver software updates to clients, and by Microsoft's anti-virus scanner Microsoft Security Essentials (a later version of Windows Defender) to fetch signature updates; it is also used by Microsoft's instant messaging products to transfer files. BITS is exposed through the Component Object Model (COM).

Technology
BITS uses idle bandwidth to transfer data. Normally, BITS transfers data in the background, i.e., it only transfers data when there is bandwidth not being used by other applications. BITS also supports resuming transfers in case of disruptions. BITS version 1.0 supports only downloads. From version 1.5, BITS supports both downloads and uploads. Uploads require the IIS web server, with the BITS server extension, on the receiving side.

Transfers
BITS transfers files on behalf of requesting applications asynchronously, i.e., once an application requests a transfer from the BITS service, it is free to do any other task, or even terminate. The transfer continues in the background as long as the network connection is available and the job owner is logged in; BITS jobs do not transfer when the job owner is not signed in. BITS suspends any ongoing transfer when the network connection is lost or the operating system is shut down, and resumes the transfer from where it left off when the computer is turned on later and the network connection is restored. BITS supports transfers over SMB, HTTP and HTTPS.

Bandwidth
BITS attempts to use only spare bandwidth.
For example, when applications use 80% of the available bandwidth, BITS will use only the remaining 20%. BITS constantly monitors network traffic for any increase or decrease and throttles its own transfers to ensure that foreground applications (such as a web browser) get the bandwidth they need. Note that BITS does not necessarily measure the actual bandwidth. BITS versions 3.0 and up will use Internet Gateway Device counters, if available, to calculate available bandwidth more accurately. Otherwise, BITS will use the speed reported by the NIC to calculate bandwidth. This can lead to bandwidth calculation errors, for example when a fast network adapter (10 Mbit/s) is connected to the network via a slow link (56 kbit/s).

Jobs
BITS uses a queue to manage file transfers. A BITS session has to be started from an application by creating a job. A job is a container which has one or more files to transfer. A newly created job is empty; files must be added, specifying both the source and destination URIs. While a download job can have any number of files, an upload job can have only one. Properties can be set for individual files. Jobs inherit the security context of the application that creates them. BITS provides API access to control jobs: a job can be programmatically started, stopped, paused, resumed, and queried for status. Before starting a job, a priority has to be set to specify when the job is processed relative to other jobs in the transfer queue. By default, all jobs are of Normal priority. Jobs can optionally be set to High, Low, or Foreground priority. Background transfers are optimized by BITS, which increases and decreases (or throttles) the rate of transfer based on the amount of idle network bandwidth that is available. If a network application begins to consume more bandwidth, BITS decreases its transfer rate to preserve the user's interactive experience, except for Foreground priority downloads.
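The idle-bandwidth rule described above can be sketched in a few lines of Python. This is an illustrative model only, not Microsoft's actual algorithm; the function name and figures are invented for the example:

```python
def background_transfer_rate(link_capacity, other_traffic, priority="Normal"):
    """Illustrative model of BITS-style throttling (not the real algorithm).

    Background jobs take only the bandwidth left idle by other
    applications; Foreground-priority jobs compete with them normally.
    Rates are in Mbit/s.
    """
    if priority == "Foreground":
        return link_capacity  # contends like any ordinary application
    return max(link_capacity - other_traffic, 0.0)

# Other applications use 80% of a 10 Mbit/s link; BITS takes the remaining 20%.
print(background_transfer_rate(10.0, 8.0))  # 2.0
```

When other traffic grows, the modelled background rate shrinks toward zero, matching the behaviour described in the article; only the Foreground case keeps competing for bandwidth.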
Scheduling
BITS schedules each job to receive only a finite time slice, during which only that job is allowed to transfer, before it is temporarily paused to give another job a chance to transfer. Higher-priority jobs get a larger share of time slices. BITS uses round-robin scheduling to process jobs of the same priority and to prevent a large transfer job from blocking smaller jobs. When a job is newly created, it is automatically suspended (or paused); it has to be explicitly resumed to be activated. Resuming moves the job to the queued state. On its turn to transfer data, the job first connects to the remote server and then starts transferring. After the job's time slice expires, the transfer is temporarily paused and the job is moved back to the queued state. When the job gets another time slice, it has to connect again before it can transfer. When the job is complete, BITS transfers ownership of the job to the application that created it. BITS includes a built-in mechanism for error handling and recovery attempts. Errors can be either fatal or transient; either moves a job to the respective state. A transient error is a temporary error that resolves itself after some time; for a transient error, BITS waits for some time and then retries. For fatal errors, BITS transfers control of the job to the creating application, with as much information regarding the error as it can provide.

Command-line interface tools

BITSAdmin command
Microsoft provides the BITS Administration Utility (BITSAdmin), a command-line utility to manage BITS jobs. The utility is part of Windows Vista and later, and is also available as part of the Windows XP Service Pack 2 Support Tools or the Windows Server 2003 Service Pack 1 Support Tools. Usage example:

C:\> bitsadmin /transfer myDownloadJob /download /priority normal https://example.com/file.zip C:\file.zip

PowerShell BitsTransfer
In Windows 7, the BITSAdmin utility is deprecated in favor of Windows PowerShell cmdlets.
The BitsTransfer PowerShell module provides eight cmdlets with which to manage BITS jobs. The following example is the equivalent of the BITSAdmin example above:

Start-BitsTransfer -Source "https://example.com/file.zip" -Destination "C:\file.zip" -DisplayName "myDownloadJob"

List of non-Microsoft applications that use BITS
AppSense – uses BITS to install packages on clients.
BITS Download Manager – a download manager for Windows that creates BITS jobs.
BITSync – an open-source utility that uses BITS to perform file synchronization on Server Message Block network shares.
Civilization V – uses BITS to download mod packages.
Endless OS installer for Windows – uses BITS to download OS images.
Eve Online – uses BITS to download all the patches post-Apocrypha (March 10, 2009); it is also now used in the client repair tool.
Some Google services, including Chrome, Gears, Pack, the Flutter updater and the YouTube Uploader, used BITS.
Firefox (since version 68) – uses BITS for updates.
KBOX Systems Management Appliance – a systems management appliance that can use BITS to deliver files to Windows systems.
RSS Bandit – uses BITS to download attachments in web feeds.
Oxygen media platform – uses BITS to distribute media content and software updates.
SharpBITS – an open-source download manager for Windows that handles BITS jobs.
WinBITS – an open-source downloader for Windows that downloads files by creating BITS jobs.
Novell ZENworks Desktop Management – systems management software that can use BITS to deliver application files to workstations.
Specops Deploy/App – systems management software that (when available) uses BITS for delivering packages to clients in the background.
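The time-sliced, priority-based round robin described in the Scheduling section above can also be modelled with a short Python sketch. This is illustrative only; the helper and its data format are invented for the example and do not reflect BITS internals:

```python
from collections import deque

def run_queue(jobs, slices):
    """Illustrative round robin over same-priority jobs (not real BITS).

    Each job is (name, units_of_work); each time slice transfers one
    unit, after which the job returns to the end of the queue.
    Returns the order in which jobs owned the slices.
    """
    queue = deque(jobs)
    order = []
    for _ in range(slices):
        if not queue:
            break
        name, remaining = queue.popleft()
        order.append(name)                   # this job owns the current slice
        remaining -= 1
        if remaining > 0:
            queue.append((name, remaining))  # back to the queued state
    return order

# A large job cannot block a small one: they alternate time slices.
print(run_queue([("big", 3), ("small", 1)], 4))  # ['big', 'small', 'big', 'big']
```

As in the article's description, the large job is paused at the end of each slice so the small job gets its turn, instead of the large transfer monopolizing the queue.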
See also
List of Microsoft Windows components
Protocols for file transfer

References

External links
Background Intelligent Transfer Service in Windows Server 2008
Fix Background Intelligent Transfer Service in Windows 10
BITS version history
bitsadmin | Microsoft Docs

Distributed data storage
Network file transfer protocols
Hypertext Transfer Protocol clients
Windows services
Windows administration
SpectraLayers
SpectraLayers is a digital audio editing software suite published by Steinberg Media Technologies GmbH. It is designed for audio spectrum editing, catering to professional and semi-professional users. It was originally published by Sony Creative Software under the name Sony SpectraLayers, until most of Sony Creative Software's products were acquired by MAGIX on 24 May 2016; in 2019 the software was acquired by Steinberg.

Overview
SpectraLayers is an advanced audio spectrum editor which allows extraction of sounds, audio restoration and creative sound design through the use of a spectral view. Its interface is similar to that of an image editor.

History
SpectraLayers was developed by Divide Frame and published by Sony as SpectraLayers Pro in July 2012. SpectraLayers Pro 2, released in July 2013, improved speed and added features such as Spectral Casting/Molding, markers and metadata support, and non-linear scales. SpectraLayers Pro 3, released in January 2015, further improved performance, added 24-bit/192 kHz audio support, and redesigned many UI components. December 2016 saw the release of SpectraLayers Pro 4, the first update released by Magix Software GmbH after its acquisition of SpectraLayers. Up until this point the software had been designed and written entirely by Robin Lobel, but in 2017 Dr. Bill Evans made additional contributions to the feature design and user interface. SpectraLayers Pro 5 was released in May 2018; its new features included a reworked GUI, an HD spectrogram, the Heal Action and the Frequency Repair tool. SpectraLayers Pro 6 was released by Steinberg in July 2019. SpectraLayers Pro 7 introduced processes based on artificial intelligence algorithms. SpectraLayers Pro 8 was released in June 2021; its key new features were Smarter AI, the De-Bleed process, AI Reverb Reduction, EQ Matching and Ambience Matching.
See also
Comparison of digital audio editors

References

External links
Steinberg - SpectraLayers page
MAGIX - SpectraLayers page
Divide Frame - SpectraLayers page

Digital audio workstation software
Soundtrack creation software
Magix software
Skype Technologies
Skype Technologies S.A.R.L (also known as Skype Software S.A.R.L, Skype Communications S.A.R.L, Skype Inc., and Skype Limited) is a Luxembourgish telecommunications company headquartered in Luxembourg City, Luxembourg, whose chief business is the development and marketing of the video chat and instant messaging software program Skype and the various Internet telephony services associated with it. Microsoft purchased the company in 2011, and it has since operated as a wholly owned subsidiary; as of 2016, it is part of Microsoft's Office Product Group. The company is a société à responsabilité limitée, or SARL, equivalent to an American limited liability company. Skype, a voice over IP (VoIP) service, was first released in 2003 as a way to make free computer-to-computer calls, or reduced-rate calls from a computer to telephones. Support for paid services such as calling landline/mobile phones from Skype (formerly called SkypeOut), allowing landline/mobile phones to call Skype (formerly called SkypeIn and now Skype Number), and voice messaging generates the majority of Skype's revenue. eBay acquired Skype Technologies S.A. in September 2005 and in April 2009 announced plans to spin it off in a 2010 initial public offering (IPO). In September 2009, Silver Lake, Andreessen Horowitz and the Canada Pension Plan Investment Board announced the acquisition of 65% of Skype for $1.9 billion from eBay, valuing the business at $2.75 billion. Skype was acquired by Microsoft in May 2011 for $8.5 billion. As of 2010, Skype was available in 27 languages, had 660 million worldwide users (an average of over 100 million active each month), and had faced challenges to its intellectual property amid political concerns by governments wishing to control telecommunications systems within their borders.
History
Skype was founded in 2003 by Janus Friis from Denmark and Niklas Zennström from Sweden, with headquarters in Luxembourg and offices now in Tallinn, Tartu, Stockholm, London, Palo Alto, Prague, and Redmond, Washington. The Skype software was originally developed by Estonians Ahti Heinla, Priit Kasesalu and Jaan Tallinn, who together with Janus Friis and Niklas Zennström were also behind the peer-to-peer file sharing software Kazaa. In April 2003, the Skype.com and Skype.net domain names were registered. In August 2003, the first public beta version was released. One of the initial names for the project was "Sky peer-to-peer", which was then abbreviated to "Skyper". However, some of the domain names associated with "Skyper" were already taken; dropping the final "r" left the current title "Skype", for which domain names were available. In September 2005, SkypeOut was banned in China. In October of the same year, eBay purchased Skype for $2.6 billion. (In 2011, Ars Technica estimated the purchase price at $3.1 billion, not $2.6 billion.) In December 2005, videotelephony was introduced. In April 2006, the number of registered users reached 100 million. In October 2006, Skype 2.0 for Mac was released, the first full release of Skype with video for Macintosh; in December, Skype announced a new pricing structure, with connection fees for all SkypeOut calls, and Skype 3.0 for Windows was released. In 2006, a feature called "Skypecasting" was introduced as a beta. It allowed recordings of Skype voice over IP calls and teleconferences to be used as podcasts. Skypecasting remained in beta until it was discontinued on 1 September 2008. Skypecasts hosted public conference calls for up to 100 people at a time. Unlike ordinary Skype peer-to-peer conference calls, Skypecasts supported moderation features suitable for panel discussions, lectures, and town hall forums. Skype operated a directory of public Skypecasts.
Throughout 2007, updates (3.1, 3.2 and 3.5) added new features including Skype Find, Skype Prime, Send Money (which allowed users to send money via PayPal from one Skype user to another), video in mood, inclusion of video content in chat, call transfer to another person or a group, and auto-redial. Skype 2.7.0.49 (beta) for Mac OS X was released, adding availability of contacts in the Mac Address Book to the Skype contact list, auto-redial, contact groups, public chat creation, and an in-window volume slider in the call window. For several days in August, Skype users in many countries were unable to connect to the full Skype network because of a system-wide crash, the result of an exceptional number of logins after a Windows patch reboot ("Patch Tuesday"). In November, there was controversy when it was announced that users allocated certain London 020 numbers (specifically those beginning '7870') would lose them, after negotiations with the provider of this batch of numbers broke down. By early 2008, the tumultuous ownership relations between the founders and eBay had resulted in significant leadership churn, with a succession of Skype presidents including Niklas Zennström, Rajiv Dutta, Alex Kazim, Niklas Zennström (again), and Henry Gomez all holding that title at various points between 2005 and 2007. The business had failed to meet certain earn-out targets, growth was decelerating, product development had slowed significantly, and in October 2007 eBay took a $1.4 billion 'impairment' on the value of Skype, admitting it had overpaid and now valuing the company at about $2.7 billion. In October 2008, analysis revealed that TOM-Skype (the Chinese version of Skype run by TOM Online) sends the content of text messages and encryption keys to monitoring servers.
Two original founders depart, new CEO and the eBay years
For the six months after the departure of Zennström and Friis, Michael van Swaaij led the company as interim CEO, until the appointment of Josh Silverman in February 2008. Silverman was "widely viewed as bringing in stability to Skype after a tumultuous phase that followed the exit of the two Skype co-founders." Under Silverman's two-and-a-half-year tenure, the company focused its product efforts around video calling, ubiquity (gaining high penetration on smartphones, PCs, TVs, and consumer-electronics devices), building tailored offerings for enterprise customers, and diversifying revenue through subscriptions, premium accounts and advertising. In advancing this strategy, Skype released many new products, substantially revamping its flagship Windows software (3.8 -> 4.0) and its macOS and Linux software, while introducing new software products for smartphones and consumer electronics. In 2009, Skype 4.0 was released, featuring full-screen high-quality video calling; the Linux client was updated, and an iPhone application was launched which topped the charts with over one million downloads in its first two days. Skype also announced the launch of its software for the Android platform. During this period Skype also discontinued lesser-used services such as support for the "Skype Me" presence indicator, which meant that a user was interested in receiving Skype calls from a non-contact. Skype also discontinued its SkypeCast service without explanation and added internal monthly and daily usage caps on its SkypeOut subscriptions, which had been advertised as "unlimited". Skype also discontinued its "dragonfly" feature, a community-generated yellow-pages product, and other features deemed to be under-performing or a distraction to management. Many users and observers had commented on the high rate of dropped calls and the difficulty of reconnecting dropped calls.
Updates included versions for the Sony PSP hand-held gaming system and version 2.0 for Linux with support for video conferencing. As part of its efforts to diversify revenues, Skype launched Skype for SIP, a service aimed at business users, in April 2008; at that time around 35 per cent of Skype's users were business users. Targeting premium products at consumers, Skype launched new monthly premium subscription products in May 2010. Marketing efforts were also revamped, with a particular focus on innovative partnerships with TV broadcasters to integrate Skype into their programming. The Oprah Winfrey Show began using Skype regularly to host video calls between Oprah and her viewers at home, culminating in a show dedicated exclusively to the wonders of Skype ("Where the Skype Are You", aired first in May 2009). Skype also became commonly used by network news stations around the world as a cost-effective replacement for sending satellite trucks, enabling fast responses from citizen journalists. Skype was also integrated into scripted TV programming, such as Californication, and into the seventh season of the U.S. syndicated version of the British game show Who Wants To Be a Millionaire in a new Ask the Expert video chat lifeline. These efforts led to accelerating growth of Skype's connected users and paying users, and by 2009 Skype was adding about 380,000 new users each day. By the end of 2009 Skype was generating around US$740 million in revenue. In January 2010, Josh Silverman was recognized for his accomplishments at Skype by being named first runner-up in the TechCrunch "Crunchies" award for "Best CEO", beaten only by Mark Pincus of Zynga.

Independence and Silver Lake
Building on the revitalization which had begun in 2008, eBay announced, in April 2009, plans to spin off Skype through an initial public offering in 2010. In August, Joltid, Ltd. filed a motion with the U.S.
Securities and Exchange Commission, seeking to terminate a licensing agreement with eBay which allowed eBay (and therefore Skype) to use the peer-to-peer communications technology on which Skype is based. If successful, this could have caused a shutdown of Skype as it existed, and made an IPO challenging to execute. In September, eBay announced the sale of 65 per cent of Skype to a consortium of Index Ventures and Silver Lake Partners. Early in September, Skype shut down the Extras developer program. In November, Skype settled the Joltid litigation and acquired all Skype technology in exchange for equity in the company and eBay completed the sale of 70% of Skype to a consortium comprising Silver Lake Partners, Joltid, CPPIB, and Andreessen Horowitz for approximately $2 billion, valuing the entire business at US$2.75 billion. In May 2010, Skype 5.0 beta was released, with a capacity to support group video calls involving up to four participants. Also in May, Skype released an updated client for the Apple iPhone that allowed Skype calls to be made over a 3G network. Originally, a 3G call subscription plan was to be instituted in 2011, but Skype eventually dropped the plan. Rounding out its ubiquity push, Skype also announced deep integration of Skype software into the IP-connected TVs from Panasonic, Samsung and Sony. In June 2010, following the rapid departure of Daniel Berg, the chief technology officer, and then chief development officer Madhu Yarlagadda, Mark Gillett, an operating partner at Silver Lake Partners, assumed the role of chief development officer, taking responsibility for development of Skype's client software, cloud services and product management following a period of several months working closely with Joshua Silverman to drive the transformation of the business, and the acceleration of cross platform and mobile product delivery. 
With its newfound independence and new ownership, Skype's growth accelerated, and by 2010 TeleGeography estimated that Skype accounted for 25 per cent of the world's international calling minutes. According to its research, the overall international calling market grew a tepid 5 to 6 per cent annually in 2010. "Skype, however, has seen a huge uptick in growth, particularly in the last two years." On 9 August 2010, Skype filed with the United States Securities and Exchange Commission (SEC) to raise up to US$100 million in an initial public offering. Sources reported that the company expected to raise at least US$1 billion. In October 2010, Skype announced the appointment of Tony Bates as CEO; Bates had been a senior VP at Cisco, responsible for its multibillion-dollar enterprise, commercial and small-business division. On 14 October 2010, Skype 5.0 for Windows was released with a number of improvements and feature additions, including a Facebook tab to allow users to SMS, chat with, or call their Facebook friends via Skype from the News Feed. The "Search for Skype Users" option was omitted from this version. On 14 January 2011, Skype acquired Qik, a mobile video-sharing platform. In March 2011, Skype named Jonathan Chadwick as its new chief financial officer and confirmed that Mark Gillett would serve full-time as chief development and operating officer following the departure of chief financial and administrative officer Adrian Dillon.

Microsoft subsidiary (2011–present)
On 10 May 2011, Microsoft announced it had agreed to acquire Skype for $8.5 billion. This marked a 300% increase in value for the company in the three years since the eBay write-down in October 2007, and constituted Microsoft's second-largest acquisition to date. It was announced that Skype would become a division within Microsoft, with Skype's former CEO Tony Bates, now its president, reporting to Microsoft CEO Steve Ballmer.
The price Microsoft agreed to pay for the company was 32 times Skype's operating profits; according to the Financial Times, this raised fears of a new "tech bubble". Ars Technica and the BBC questioned the value for Microsoft in the purchase. Microsoft's acquisition of Skype received EU approval on 7 October 2011. In October 2012, one year after the closure of the Skype acquisition, the newly formed Skype Division took responsibility for Microsoft's other VoIP and unified communications product, Microsoft Lync. On 11 July 2013, Microsoft's then-CEO Steve Ballmer announced a reorganization of Microsoft along functional lines, with four engineering groups each led by a senior leader. Microsoft's new Applications and Services Group, led by executive vice president Qi Lu, was to include Skype along with Bing and Microsoft Office. Following a period in which the strategy for the Skype business as part of the broader Microsoft portfolio (including Office 365) was established, and Skype's share of the international communications market rose to 36 per cent (over 214 billion minutes), Mark Gillett announced that he would be leaving Microsoft to return to Silver Lake. In September 2016, after Qi Lu stepped down from Microsoft, Skype and Office became part of the Office Product Group, led by Rajesh Jha. The other part of the former Apps and Services Group (which includes Bing) became part of a new AI and Research Group.

Palo Alto office
The Skype North American headquarters was opened in early 2013 after the design was completed by the San Francisco architecture firm Blitz. Located in Palo Alto, California, the space is designed to encourage interaction and spontaneity while also introducing a sense of humor into the workplace. Fake lawn, cushions that look like boulders, open spaces, and high ceilings accommodate 250 employees in the building. The Palo Alto headquarters also houses a games room complete with a pool table and a table football machine.
The office's LEED Silver certification means that it received between 50 and 59 points for its environmentally friendly construction.

Legal issues

P2P licensing dispute lawsuit and IPO
In its 2008 annual report, eBay admitted to an ongoing dispute between it and Joltid Ltd. over the licensing of Joltid's peer-to-peer "Global Index" technology in the Skype application, and announced that it had terminated a standstill agreement, allowing either company to sue. On 1 April 2009, eBay filed with a UK court to settle the legal dispute. A few days later, eBay announced plans for a public stock offering in 2010 to spin off Skype as a separate, publicly owned company. Some media outlets characterized the proposed sale and the ongoing provision of Skype as being under threat because of the concurrent dispute. On 1 September 2009, a group of investors led by Silver Lake bought 65% of Skype for $1.91 billion, which prompted Joltid to countersue eBay on 17 September 2009. Both parties settled the simultaneous suits in November, resulting in Joltid's part-ownership of the newly formed Skype Limited. The final holdings were Silver Lake at 56% (reduced from its earlier share), Joltid entering at 14%, and eBay retaining 30%.

Other intellectual property challenges

Streamcast lawsuit
StreamCast Networks filed a complaint in U.S. District Court in Los Angeles, alleging theft of its peer-to-peer technology and violation of the Racketeer Influenced and Corrupt Organizations statute. The complaint, titled StreamCast Networks Inc v. Skype Technologies, S.A., was filed on 20 January 2006 in federal court in the Central District of California and assigned case number 2:2006cv00391. The $4.1 billion lawsuit did not name Skype's parent company, eBay, when initially filed. StreamCast's lawsuit was subsequently amended on 22 May 2006 to include eBay and 21 other party defendants.
In its lawsuit, StreamCast sought a worldwide injunction on the sale and marketing of eBay's Skype Internet voice communication products, as well as billions of dollars in unspecified damages. The lawsuit was dismissed in a decision filed on 19 January 2007.

IDT lawsuit
On 1 June 2006, Net2Phone (the Internet telephone unit of IDT Corp.) filed a lawsuit against eBay and Skype, accusing the unit of infringing a patent granted in 2000. The lawsuit was settled between the parties in 2010.

GPL lawsuit
In July 2007, a German court found Skype guilty of violating the GNU General Public License in one of its for-sale products, the SMC WSKP100.

Political issues

China 2005
Skype was one of many companies (others include AOL, Microsoft, Yahoo, and Cisco) which cooperated with the Chinese government in implementing a system of Internet censorship in mainland China. Critics of such policies argue that it is wrong for companies to profit from censorship and restrictions on freedom of the press and freedom of speech. Human rights advocates such as Human Rights Watch and media groups such as Reporters Without Borders speculate that if companies stopped contributing to the authorities' censorship efforts, the government could be pressured to change. Niklas Zennström, then chief executive of Skype, told reporters that its joint venture partner in China operated in compliance with domestic law. "TOM Online had implemented a text filter, which is what everyone else in that market is doing," said Zennström. "Those are the regulations," he said. "I may like or not like the laws and regulations to operate businesses in the UK or Germany or the US, but if I do business there I choose to comply with those laws and regulations. I can try to lobby to change them, but I need to comply with them. China in that way is not different."
France 2005
In September 2005, the French Ministry of Research, acting on advice from the General Secretariat of National Defence, issued an official disapproval of the use of Skype in public research and higher education; some services interpreted this decision as an outright ban. The exact reasons for the decision were not given.

United States, CALEA 2006
In May 2006, the Federal Communications Commission (FCC) successfully applied the Communications Assistance for Law Enforcement Act to allow wiretapping on digital phone networks. Skype was not compliant with the Act and stated that it did not plan to comply.

United States, transparency and PRISM 2013
Starting in November 2010, Skype participated in a U.S. government surveillance program titled PRISM, allowing the National Security Agency (NSA) unfettered access to people's chats and video and audio communications. However, it was not until February 2011 that the company was formally served with a directive to comply, signed by the attorney general. In March 2013, the company issued transparency reports, confirming long-held beliefs that Skype responds to requests from governments. Microsoft's general counsel, Brad Smith, wrote to the US Attorney General in July 2013, outlining the company's concerns in a blog post that included a copy of the letter.

See also
Features of Skype

References

External links
Skype.com
Skype learning: pros and cons

2005 mergers and acquisitions
2011 mergers and acquisitions
EBay
Microsoft acquisitions
Microsoft subsidiaries
Microsoft divisions
Skype
Skype people
Software companies of Luxembourg
VoIP companies
2003 establishments in Luxembourg
Software companies established in 2003
Companies based in Luxembourg City
55726571
https://en.wikipedia.org/wiki/ZipBooks
ZipBooks
ZipBooks is an accounting software company based in American Fork, Utah. The cloud-based software is an accounting and bookkeeping tool that helps business owners process credit cards and send and finance invoices, among other features.

History

ZipBooks was founded by Tim Chaves in June 2015, backed by venture capital firm Peak Ventures. The company secured an additional $2 million of funding in July 2016, and in 2017 it was awarded a $100,000 economic grant by the Utah Governor's Office of Economic Development Technology Commercialization and Innovation Program.

Products

ZipBooks' core modules are invoicing, transactions, bills, reporting, time tracking, contacts, and payroll. Accrual accounting was added in 2017. The application is available on G Suite, iOS, Slack, and as a web application.

Reception

Computerworld compared ZipBooks favorably with other accounting software. PC Magazine praised its user experience, but stated it lacked "a lot of features that competing sites offer".

See also

Comparison of accounting software
Double-entry bookkeeping system
Software as a service
Time tracking software
Web application
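The double-entry bookkeeping model underlying accounting tools of this kind can be sketched briefly. The following Python example is purely illustrative and does not reflect ZipBooks' actual data model or API; the `Ledger` class and account names are hypothetical. It shows the core invariant: every transaction posts a matching debit and credit, so all accounts net to zero.

```python
from collections import defaultdict

# Hypothetical, minimal double-entry ledger (not ZipBooks' implementation).
class Ledger:
    def __init__(self):
        self.balances = defaultdict(float)

    def post(self, debit_account, credit_account, amount):
        """Record one transaction: every debit has a matching credit."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balances[debit_account] += amount
        self.balances[credit_account] -= amount

    def in_balance(self):
        """Across all accounts, debits and credits must net to zero."""
        return abs(sum(self.balances.values())) < 1e-9

ledger = Ledger()
ledger.post("Accounts Receivable", "Revenue", 500.00)  # invoice sent (accrual)
ledger.post("Cash", "Accounts Receivable", 500.00)     # invoice paid
print(ledger.in_balance())      # True
print(ledger.balances["Cash"])  # 500.0
```

Recording revenue when the invoice is sent, before cash arrives, is the essence of the accrual accounting feature mentioned above.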
5406957
https://en.wikipedia.org/wiki/OmniOutliner
OmniOutliner
OmniOutliner is commercial outlining software for macOS and iOS produced by The Omni Group. OmniOutliner has most of the features of a conventional outliner, allowing the user to create nested lists of topics for almost any purpose, but it has additional features extending its functionality beyond simple outlining. Recent versions of the software are universal binaries. OmniOutliner received a special mention in the 2005 Apple Design Awards, and Macworld gave the "Professional" version its highest rating.

History

OmniOutliner 4 for Mac was released on January 15, 2014, with a modern redesign featuring new sidebars and a dynamic inspector, text zooming, smart match, date-pasting logic, and more. OmniOutliner 4.5 for Mac was released on March 2, 2016, a major update with more control over printing selected rows, filtering indentation, export with tab-separated indentation, and more. OmniOutliner 5 for Mac was released on April 5, 2017. OmniOutliner 3 for iOS was released in 2018.

Feature set

Outlining

OmniOutliner provides the basic functions of an outliner, structuring content in a hierarchy of rows indented under one another to show the relationships between different items. The user can expand or collapse outline levels for easy viewing, sort topics, promote or demote the level of a topic, or "hoist" one level so only that topic is shown. It supports considerable control over styles, allowing the user to make global changes to the appearance of the outline at a particular level. It also permits the user to add notes to any row, which can be displayed in-line (i.e., within the structure of the outline) or in a separate pane below the outline. OmniOutliner documents can incorporate multimedia elements, including images, audio, and video, as well as PDF documents and web links. In addition, OmniOutliner allows the addition of columns to the outline, so that the user can create rudimentary spreadsheets.
There is limited support for summarizing columns, such as totaling or averaging, albeit not with anything close to the variety of functions provided in conventional spreadsheet software such as Microsoft Excel. OmniOutliner does not support cloning, a feature of some outliners that allows one topic to appear in more than one place in the outline; The Omni Group may, however, add this feature in the future.

Extensibility

OmniOutliner supports scripting via AppleScript, and users have extended the software to export to iCal, Apple's calendaring software, and even the iPod.

Import and export

OmniOutliner's document format is proprietary, but it can export to OPML, HTML, DOCX, and several text and rich text formats. It can import from several other document formats (ACTA, MORE, Keynote, and Concurrence), as well as text and rich text formats.
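OPML, one of the export formats mentioned above, represents an outline hierarchy as nested XML `outline` elements. The following Python sketch builds a minimal OPML 2.0 document of the kind an outliner might export; it follows the OPML specification's element names, not OmniOutliner's proprietary format, and the `build_opml` helper is illustrative only.

```python
import xml.etree.ElementTree as ET

# Illustrative helper: serialize a nested outline as minimal OPML 2.0.
def build_opml(title, rows):
    """rows: nested (text, children) tuples mirroring an outline hierarchy."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")

    def add(parent, items):
        for text, children in items:
            # Each row becomes an <outline> element; nesting preserves levels.
            node = ET.SubElement(parent, "outline", text=text)
            add(node, children)

    add(body, rows)
    return ET.tostring(opml, encoding="unicode")

doc = build_opml("Project plan", [
    ("Research", [("Read spec", []), ("Prototype", [])]),
    ("Write-up", []),
])
print(doc)
```

Because the mapping from rows to `outline` elements is direct, OPML preserves the outline's structure but not styling or columns, which is why richer exchange between outliners usually requires the richer formats listed above.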